Oct 06 11:47:24 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 06 11:47:24 crc restorecon[4674]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:24 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:47:25 crc restorecon[4674]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 
11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25
crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:47:25 crc restorecon[4674]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 
06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 
crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc 
restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:47:25 crc restorecon[4674]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 06 11:47:26 crc kubenswrapper[4958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 11:47:26 crc kubenswrapper[4958]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 06 11:47:26 crc kubenswrapper[4958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 11:47:26 crc kubenswrapper[4958]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 06 11:47:26 crc kubenswrapper[4958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 06 11:47:26 crc kubenswrapper[4958]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.641625 4958 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652101 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652225 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652240 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652252 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652264 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652273 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652281 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652291 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652302 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652312 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652320 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652328 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652336 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652345 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652353 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652361 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652369 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652376 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652384 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652392 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652400 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652415 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652424 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652434 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652444 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652452 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652460 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652468 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652475 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652483 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652491 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652498 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652506 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652515 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652524 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652533 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652542 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652552 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652560 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652569 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652577 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652586 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652593 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652601 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652608 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652616 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652624 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652631 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652639 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652650 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652659 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652667 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652674 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652682 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652691 4958 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652699 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652707 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652714 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652722 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652730 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652737 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652745 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652753 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652760 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652768 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652776 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652784 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652792 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652800 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652807 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.652815 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653662 4958 flags.go:64] FLAG: --address="0.0.0.0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653688 4958 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653705 4958 flags.go:64] FLAG: --anonymous-auth="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653716 4958 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653728 4958 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653738 4958 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653751 4958 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653762 4958 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653771 4958 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653780 4958 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653790 4958 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653801 4958 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653810 4958 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653818 4958 flags.go:64] FLAG: --cgroup-root=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653827 4958 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653837 4958 flags.go:64] FLAG: --client-ca-file=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653846 4958 flags.go:64] FLAG: --cloud-config=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653855 4958 flags.go:64] FLAG: --cloud-provider=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653863 4958 flags.go:64] FLAG: --cluster-dns="[]"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653874 4958 flags.go:64] FLAG: --cluster-domain=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653882 4958 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653891 4958 flags.go:64] FLAG: --config-dir=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653900 4958 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653912 4958 flags.go:64] FLAG: --container-log-max-files="5"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653936 4958 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653957 4958 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653974 4958 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.653988 4958 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654000 4958 flags.go:64] FLAG: --contention-profiling="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654014 4958 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654027 4958 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654039 4958 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654049 4958 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654076 4958 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654085 4958 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654094 4958 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654103 4958 flags.go:64] FLAG: --enable-load-reader="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654112 4958 flags.go:64] FLAG: --enable-server="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654122 4958 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654138 4958 flags.go:64] FLAG: --event-burst="100"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654196 4958 flags.go:64] FLAG: --event-qps="50"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654214 4958 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654226 4958 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654238 4958 flags.go:64] FLAG: --eviction-hard=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654253 4958 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654263 4958 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654272 4958 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654284 4958 flags.go:64] FLAG: --eviction-soft=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654294 4958 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654303 4958 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654312 4958 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654321 4958 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654329 4958 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654339 4958 flags.go:64] FLAG: --fail-swap-on="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654347 4958 flags.go:64] FLAG: --feature-gates=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654358 4958 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654367 4958 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654377 4958 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654387 4958 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654396 4958 flags.go:64] FLAG: --healthz-port="10248"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654406 4958 flags.go:64] FLAG: --help="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654417 4958 flags.go:64] FLAG: --hostname-override=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654426 4958 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654435 4958 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654444 4958 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654454 4958 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654462 4958 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654471 4958 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654480 4958 flags.go:64] FLAG: --image-service-endpoint=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654490 4958 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654502 4958 flags.go:64] FLAG: --kube-api-burst="100"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654525 4958 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654542 4958 flags.go:64] FLAG: --kube-api-qps="50"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654553 4958 flags.go:64] FLAG: --kube-reserved=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654565 4958 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654576 4958 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654588 4958 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654599 4958 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654609 4958 flags.go:64] FLAG: --lock-file=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654619 4958 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654629 4958 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654639 4958 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654654 4958 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654666 4958 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654676 4958 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654685 4958 flags.go:64] FLAG: --logging-format="text"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654694 4958 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654704 4958 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654713 4958 flags.go:64] FLAG: --manifest-url=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654723 4958 flags.go:64] FLAG: --manifest-url-header=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654736 4958 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654745 4958 flags.go:64] FLAG: --max-open-files="1000000"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654765 4958 flags.go:64] FLAG: --max-pods="110"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654775 4958 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654784 4958 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654793 4958 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654802 4958 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654811 4958 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654820 4958 flags.go:64] FLAG: --node-ip="192.168.126.11"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654829 4958 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654853 4958 flags.go:64] FLAG: --node-status-max-images="50"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654862 4958 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654871 4958 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654880 4958 flags.go:64] FLAG: --pod-cidr=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654889 4958 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654906 4958 flags.go:64] FLAG: --pod-manifest-path=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654915 4958 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654925 4958 flags.go:64] FLAG: --pods-per-core="0"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654934 4958 flags.go:64] FLAG: --port="10250"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654943 4958 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654952 4958 flags.go:64] FLAG: --provider-id=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654961 4958 flags.go:64] FLAG: --qos-reserved=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654970 4958 flags.go:64] FLAG: --read-only-port="10255"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654979 4958 flags.go:64] FLAG: --register-node="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654988 4958 flags.go:64] FLAG: --register-schedulable="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.654997 4958 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655012 4958 flags.go:64] FLAG: --registry-burst="10"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655022 4958 flags.go:64] FLAG: --registry-qps="5"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655031 4958 flags.go:64] FLAG: --reserved-cpus=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655041 4958 flags.go:64] FLAG: --reserved-memory=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655054 4958 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655067 4958 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655079 4958 flags.go:64] FLAG: --rotate-certificates="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655090 4958 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655102 4958 flags.go:64] FLAG: --runonce="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655113 4958 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655126 4958 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655138 4958 flags.go:64] FLAG: --seccomp-default="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655187 4958 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655200 4958 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655213 4958 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655225 4958 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655238 4958 flags.go:64] FLAG: --storage-driver-password="root"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655249 4958 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655261 4958 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655273 4958 flags.go:64] FLAG: --storage-driver-user="root"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655285 4958 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655299 4958 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655312 4958 flags.go:64] FLAG: --system-cgroups=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655332 4958 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655350 4958 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655360 4958 flags.go:64] FLAG: --tls-cert-file=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655369 4958 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655381 4958 flags.go:64] FLAG: --tls-min-version=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655390 4958 flags.go:64] FLAG: --tls-private-key-file=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655399 4958 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655408 4958 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655417 4958 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655426 4958 flags.go:64] FLAG: --v="2"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655439 4958 flags.go:64] FLAG: --version="false"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655450 4958 flags.go:64] FLAG: --vmodule=""
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655462 4958 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.655472 4958 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656853 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656876 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656890 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656901 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656911 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656921 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656930 4958 feature_gate.go:330] unrecognized feature gate: Example
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656939 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656947 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656956 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656965 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656976 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656986 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.656994 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657004 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657014 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657023 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657032 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657043 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657051 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657060 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657069 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657077 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657085 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657094 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657102 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657110 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657118 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657126 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657134 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657172 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657180 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657189 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657197 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657206 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657214 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657223 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657243 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657253 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657261 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657269 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657277 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657285 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657293 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657301 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657308 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657317 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657325 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657333 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657343 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657353 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657364 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657376 4958 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657387 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657397 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657407 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657418 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657427 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657438 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657447 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657458 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657467 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657477 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657487 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657496 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657506 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657516 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657529 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657540 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657556 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.657566 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.657582 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.672351 4958 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.672416 4958 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672565 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672582 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672593 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672604 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672614 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672624 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672633 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672641 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672650 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672659 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672667 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672676 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672684 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672693 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672703 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672711 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 06 11:47:26 crc 
kubenswrapper[4958]: W1006 11:47:26.672719 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672731 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672746 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672755 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672764 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672774 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672783 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672792 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672801 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672810 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672819 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672828 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672837 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672847 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 11:47:26 crc 
kubenswrapper[4958]: W1006 11:47:26.672856 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672865 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672875 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672884 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672895 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672903 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672914 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672926 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672935 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672944 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672953 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672963 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672972 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672981 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 
11:47:26.672990 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.672999 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673007 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673019 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673029 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673037 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673046 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673055 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673064 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673075 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673083 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673093 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673101 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673111 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 11:47:26 crc kubenswrapper[4958]: 
W1006 11:47:26.673119 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673128 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673137 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673222 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673243 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673254 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673265 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673277 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673286 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673295 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673303 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673312 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673324 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.673341 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false 
NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673649 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673669 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673680 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673690 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673701 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673710 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673720 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673729 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673738 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673747 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673756 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673764 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673773 4958 feature_gate.go:330] unrecognized feature 
gate: GCPLabelsTags Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673781 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673790 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673798 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673810 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673823 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673834 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673844 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673853 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673862 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673870 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673879 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673887 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673897 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673905 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 11:47:26 
crc kubenswrapper[4958]: W1006 11:47:26.673913 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673921 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673930 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673938 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673946 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673955 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673965 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673980 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673989 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.673998 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674006 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674014 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674022 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674031 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 11:47:26 crc 
kubenswrapper[4958]: W1006 11:47:26.674039 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674047 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674055 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674063 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674072 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674080 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674089 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674097 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674105 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674114 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674122 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674131 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674139 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674192 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674204 4958 feature_gate.go:330] unrecognized 
feature gate: AutomatedEtcdBackup Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674215 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674229 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674242 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674255 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674266 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674279 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674288 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674297 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674306 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674317 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674328 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674341 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674351 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674361 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.674372 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.674386 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.674706 4958 server.go:940] "Client rotation is on, will bootstrap in background" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.680961 4958 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.681125 4958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.683302 4958 server.go:997] "Starting client certificate rotation" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.683342 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.683579 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 10:20:21.40788368 +0000 UTC Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.683757 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2158h32m54.724131651s for next certificate rotation Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.710414 4958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.714964 4958 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.729516 4958 log.go:25] "Validated CRI v1 runtime API" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.765929 4958 log.go:25] "Validated CRI v1 image API" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.768377 4958 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.774304 4958 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-06-11-43-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.774355 4958 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.804978 4958 manager.go:217] Machine: {Timestamp:2025-10-06 11:47:26.802456474 +0000 UTC m=+0.688481862 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c838f8b7-52cc-46da-a415-eff8b7b887b9 BootID:71775188-f0be-4b91-a34f-f469bf9337b6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:90:e0:02 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:90:e0:02 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:55:d8:3d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b5:2a:fd Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e9:ba:dc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:56:81:2b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:e6:11:d0:2a:c4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:17:11:e6:e9:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.805563 4958 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.805847 4958 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.806781 4958 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.807133 4958 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.807226 4958 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.807583 4958 topology_manager.go:138] "Creating topology manager with none policy" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.807603 4958 container_manager_linux.go:303] "Creating device plugin manager" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.808333 4958 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.809075 4958 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.809449 4958 state_mem.go:36] "Initialized new in-memory state store" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.809597 4958 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.814456 4958 kubelet.go:418] "Attempting to sync node with API server" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.814497 4958 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.814542 4958 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.814567 4958 kubelet.go:324] "Adding apiserver pod source" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.814692 4958 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 
11:47:26.824950 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.825057 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.825220 4958 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.825278 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.825437 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.826321 4958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.828321 4958 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.829953 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.829981 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.829989 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.829997 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830010 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830020 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830030 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830045 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830054 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830065 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830086 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.830094 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.831692 4958 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.832344 4958 server.go:1280] "Started kubelet" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.832503 4958 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.832598 4958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.833401 4958 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 06 11:47:26 crc systemd[1]: Started Kubernetes Kubelet. Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.833715 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.839750 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.839909 4958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.839839 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:24:28.545346878 +0000 UTC Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.840032 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1200h37m1.705325459s for next certificate rotation Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.840204 4958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.840695 4958 volume_manager.go:287] "The desired_state_of_world populator 
starts" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.840748 4958 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.840933 4958 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.841483 4958 factory.go:55] Registering systemd factory Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.841512 4958 factory.go:221] Registration of the systemd container factory successfully Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.842253 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.842332 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.842486 4958 factory.go:153] Registering CRI-O factory Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.842537 4958 factory.go:221] Registration of the crio container factory successfully Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.842478 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.842694 4958 factory.go:219] Registration of the containerd container factory failed: unable to create containerd 
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.842727 4958 factory.go:103] Registering Raw factory Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.842747 4958 manager.go:1196] Started watching for new ooms in manager Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.841181 4958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186be46f787d74bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-06 11:47:26.832301247 +0000 UTC m=+0.718326555,LastTimestamp:2025-10-06 11:47:26.832301247 +0000 UTC m=+0.718326555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.843322 4958 manager.go:319] Starting recovery of all containers Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.847111 4958 server.go:460] "Adding debug handlers to kubelet server" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852658 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852730 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852745 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852761 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852774 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852787 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852801 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852814 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852831 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852842 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852853 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852867 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852880 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852896 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852912 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852928 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852941 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852957 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852970 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.852988 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853033 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853051 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853064 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853079 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853095 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853113 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" 
seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853135 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853166 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853181 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853197 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853214 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853241 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 
11:47:26.853256 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853272 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853285 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853301 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853315 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853333 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853353 4958 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853367 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853380 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853394 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853408 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853421 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853436 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853450 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853463 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853478 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853491 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853509 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853521 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853533 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853554 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853568 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853584 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853598 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853616 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853629 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853646 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853659 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853672 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853687 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853701 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853724 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853739 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853753 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853767 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853782 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853795 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853810 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853827 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853840 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853856 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853870 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.853884 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856193 4958 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856266 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856293 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856311 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856333 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856351 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856368 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856418 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856438 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856456 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856475 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856494 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856514 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856534 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856552 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856567 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856585 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856601 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856618 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856635 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856658 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856679 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856695 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856711 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856728 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856746 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856764 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856784 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856814 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856834 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" 
seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856883 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856904 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856931 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856958 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.856981 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857009 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: 
I1006 11:47:26.857029 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857047 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857067 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857087 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857104 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857122 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857163 4958 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857181 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857201 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857217 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857234 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857252 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857270 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857286 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857303 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857321 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857339 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857354 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857372 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857393 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857409 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857426 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857442 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857457 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857473 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857490 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857510 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857527 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857544 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857560 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857575 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" 
seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857590 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857606 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857621 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857635 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857650 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857664 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: 
I1006 11:47:26.857679 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857693 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857709 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857979 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.857996 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858008 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858022 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858035 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858048 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858063 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858075 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858088 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858102 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858115 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858127 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858159 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858176 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858214 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858228 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858240 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858256 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858269 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858281 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858296 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858308 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858320 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858332 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858345 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858358 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858372 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858385 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858398 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858410 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858423 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858436 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858452 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858464 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858476 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858488 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858504 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858515 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858528 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858543 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858561 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858576 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858589 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858602 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858615 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858627 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858639 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858651 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858663 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858676 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858689 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858706 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858723 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858734 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858748 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858768 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858786 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858802 4958 reconstruct.go:97] "Volume reconstruction finished" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.858813 4958 reconciler.go:26] "Reconciler: start to sync state" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 
11:47:26.867839 4958 manager.go:324] Recovery completed Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.886839 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.889367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.889405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.889433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.891558 4958 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.891575 4958 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.891619 4958 state_mem.go:36] "Initialized new in-memory state store" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.907858 4958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.910752 4958 policy_none.go:49] "None policy: Start" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.911943 4958 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.911981 4958 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.912010 4958 kubelet.go:2335] "Starting kubelet main sync loop" Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.912230 4958 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.912573 4958 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.912630 4958 state_mem.go:35] "Initializing new in-memory state store" Oct 06 11:47:26 crc kubenswrapper[4958]: W1006 11:47:26.913208 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.913269 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.940988 4958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.965880 4958 manager.go:334] "Starting Device Plugin manager" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.966260 4958 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 06 11:47:26 crc 
kubenswrapper[4958]: I1006 11:47:26.966292 4958 server.go:79] "Starting device plugin registration server" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.966833 4958 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.966853 4958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.967193 4958 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.967294 4958 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 06 11:47:26 crc kubenswrapper[4958]: I1006 11:47:26.967307 4958 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 06 11:47:26 crc kubenswrapper[4958]: E1006 11:47:26.976197 4958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.012549 4958 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.012694 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.014530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.014579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.014590 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.014774 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.014987 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.015080 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.015973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016188 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016324 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016360 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.016630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017400 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017648 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017727 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.017912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.018261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.018287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.018302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.018452 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.018573 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.018602 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019921 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.019980 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.022573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.022604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.022617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: E1006 11:47:27.043342 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.067485 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.069261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.069295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.069307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.069333 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:47:27 crc kubenswrapper[4958]: E1006 11:47:27.069855 4958 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071293 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071479 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071504 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071522 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071621 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071741 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071802 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071875 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.071909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 
11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.172764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.172835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.172869 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.172903 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.172936 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.172966 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173048 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173082 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173364 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173411 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173423 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173498 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173529 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173565 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173643 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc 
kubenswrapper[4958]: I1006 11:47:27.173648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173682 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173671 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.173770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.270010 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.271721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.271763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.271778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.271828 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:47:27 crc kubenswrapper[4958]: E1006 11:47:27.272340 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Oct 
06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.372125 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.396300 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: W1006 11:47:27.411677 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3dce897b1e0cca390003b4888051a447e1ebd376d4a66f172b7484987e305ef3 WatchSource:0}: Error finding container 3dce897b1e0cca390003b4888051a447e1ebd376d4a66f172b7484987e305ef3: Status 404 returned error can't find the container with id 3dce897b1e0cca390003b4888051a447e1ebd376d4a66f172b7484987e305ef3 Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.428077 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: W1006 11:47:27.428277 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f24a99ed9eece06bc06942b2e1d4ababe9128f6ea2c461c8b54a84f145cfa7a3 WatchSource:0}: Error finding container f24a99ed9eece06bc06942b2e1d4ababe9128f6ea2c461c8b54a84f145cfa7a3: Status 404 returned error can't find the container with id f24a99ed9eece06bc06942b2e1d4ababe9128f6ea2c461c8b54a84f145cfa7a3 Oct 06 11:47:27 crc kubenswrapper[4958]: E1006 11:47:27.444896 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Oct 06 11:47:27 crc kubenswrapper[4958]: W1006 11:47:27.450555 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-631e55a1753369667386f79672af23e96b79974c04056d964adbea106bd471a3 WatchSource:0}: Error finding container 631e55a1753369667386f79672af23e96b79974c04056d964adbea106bd471a3: Status 404 returned error can't find the container with id 631e55a1753369667386f79672af23e96b79974c04056d964adbea106bd471a3 Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.461740 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.471254 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:47:27 crc kubenswrapper[4958]: W1006 11:47:27.479954 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ad279453399588917a99a5b1be047be74b2eee7b6a1275ea6b37cb08aa76355b WatchSource:0}: Error finding container ad279453399588917a99a5b1be047be74b2eee7b6a1275ea6b37cb08aa76355b: Status 404 returned error can't find the container with id ad279453399588917a99a5b1be047be74b2eee7b6a1275ea6b37cb08aa76355b Oct 06 11:47:27 crc kubenswrapper[4958]: W1006 11:47:27.486525 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2eacaace898b1630ba3e259c1ceb958d3d656c598e043c6d572d8dd02e83547b WatchSource:0}: Error finding container 2eacaace898b1630ba3e259c1ceb958d3d656c598e043c6d572d8dd02e83547b: Status 404 returned error can't find the container with id 2eacaace898b1630ba3e259c1ceb958d3d656c598e043c6d572d8dd02e83547b Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.673065 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.674488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.674523 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.674536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.674567 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:47:27 crc 
kubenswrapper[4958]: E1006 11:47:27.675124 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.838502 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.919522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f24a99ed9eece06bc06942b2e1d4ababe9128f6ea2c461c8b54a84f145cfa7a3"} Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.921704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3dce897b1e0cca390003b4888051a447e1ebd376d4a66f172b7484987e305ef3"} Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.922625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2eacaace898b1630ba3e259c1ceb958d3d656c598e043c6d572d8dd02e83547b"} Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.923352 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad279453399588917a99a5b1be047be74b2eee7b6a1275ea6b37cb08aa76355b"} Oct 06 11:47:27 crc kubenswrapper[4958]: I1006 11:47:27.924363 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"631e55a1753369667386f79672af23e96b79974c04056d964adbea106bd471a3"} Oct 06 11:47:27 crc kubenswrapper[4958]: W1006 11:47:27.945315 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:27 crc kubenswrapper[4958]: E1006 11:47:27.945429 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:28 crc kubenswrapper[4958]: W1006 11:47:28.044662 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:28 crc kubenswrapper[4958]: E1006 11:47:28.044776 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:28 crc kubenswrapper[4958]: W1006 11:47:28.079793 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused 
Oct 06 11:47:28 crc kubenswrapper[4958]: E1006 11:47:28.079881 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:28 crc kubenswrapper[4958]: E1006 11:47:28.246619 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Oct 06 11:47:28 crc kubenswrapper[4958]: W1006 11:47:28.405096 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:28 crc kubenswrapper[4958]: E1006 11:47:28.405273 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.475907 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.477850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.477906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: 
I1006 11:47:28.477920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.477958 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:47:28 crc kubenswrapper[4958]: E1006 11:47:28.478657 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.838464 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.929901 4958 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73" exitCode=0 Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.929980 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.930079 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.931899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.931958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.931974 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.934204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.934257 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.934268 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.934281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.934369 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.936084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.936137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 
11:47:28.936212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.937015 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91" exitCode=0 Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.937076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.937243 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.938270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.938305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.938319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.939118 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d37b0a556d9c93384881a2bd3d6d16342ed70f9917f49aa348dd3b32586575d7" exitCode=0 Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.939232 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.939275 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d37b0a556d9c93384881a2bd3d6d16342ed70f9917f49aa348dd3b32586575d7"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.940009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.940041 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.940054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.940952 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1ceb14a5d5b24537500f212dc4a7714e58a3019cfa39ba72563b38b9e5ac539f" exitCode=0 Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.940996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1ceb14a5d5b24537500f212dc4a7714e58a3019cfa39ba72563b38b9e5ac539f"} Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.941075 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.941747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.941780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.941796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.943239 4958 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.944713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.944769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:28 crc kubenswrapper[4958]: I1006 11:47:28.944784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.838912 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Oct 06 11:47:29 crc kubenswrapper[4958]: E1006 11:47:29.847877 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.855964 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.953385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a"} Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.953466 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.953486 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.953505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.959945 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2515eb7bd285f5fe76b86155a8d863d766852939740431dfdd9ddc148726b8dd" exitCode=0
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.960063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2515eb7bd285f5fe76b86155a8d863d766852939740431dfdd9ddc148726b8dd"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.960197 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.962462 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.962738 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"35372e4733f8be2a27441a86646b882967d6b859b3810925f64856d952f15215"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.963580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.963624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.963640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.963587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.964068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.964087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.966982 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.967649 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.967706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.967724 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43"}
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.969255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.969298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.969315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.969409 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.970348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.970377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:29 crc kubenswrapper[4958]: I1006 11:47:29.970386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.079245 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.080961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.080997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.081005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.081031 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:47:30 crc kubenswrapper[4958]: E1006 11:47:30.081678 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc"
Oct 06 11:47:30 crc kubenswrapper[4958]: W1006 11:47:30.277248 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Oct 06 11:47:30 crc kubenswrapper[4958]: E1006 11:47:30.277504 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:47:30 crc kubenswrapper[4958]: W1006 11:47:30.458692 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Oct 06 11:47:30 crc kubenswrapper[4958]: E1006 11:47:30.458873 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.974662 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7e0a8bb1d689ef3eb1c2db4df213be7b4ba246e093424a0034b18ce5b68b2cfd" exitCode=0
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.974728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7e0a8bb1d689ef3eb1c2db4df213be7b4ba246e093424a0034b18ce5b68b2cfd"}
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.974802 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.975698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.975732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.975743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.979783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09"}
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.979830 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.979870 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.979882 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.979902 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.980024 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:30 crc kubenswrapper[4958]: I1006 11:47:30.981431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.718718 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.728454 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989523 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b68c0aaf26d798be1e54562398e215687d5d87d1365dcf6932e84bfdebbfef2c"}
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d7e898f6fcf406fe849a73767f83f6dd71b56fae41fd8c8078425f1a76952480"}
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989626 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"376f1a7800688b910ce6fe69690e4138e4a549b8e25f18c8ee3016cd96c94ddb"}
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989684 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9824eb1afb8c5aae680c233213cd4177b7083863b1d57b11b73538f01bc19e74"}
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989713 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989778 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.989860 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.991265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.991343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.991383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.991346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.991465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:31 crc kubenswrapper[4958]: I1006 11:47:31.991406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:32 crc kubenswrapper[4958]: I1006 11:47:32.520187 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:47:32 crc kubenswrapper[4958]: I1006 11:47:32.574614 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.005818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9bd51f876ba05a7a6eaa8b3b5192e9481df601848d04f99ebaa118e46d7c73e9"}
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.005881 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.006045 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.006044 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.007718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.281807 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.283839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.283910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.284006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.284051 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:47:33 crc kubenswrapper[4958]: I1006 11:47:33.884260 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.009471 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.009617 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.009471 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.010932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.011997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:34 crc kubenswrapper[4958]: I1006 11:47:34.702053 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:47:35 crc kubenswrapper[4958]: I1006 11:47:35.016105 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:35 crc kubenswrapper[4958]: I1006 11:47:35.017596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:35 crc kubenswrapper[4958]: I1006 11:47:35.017663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:35 crc kubenswrapper[4958]: I1006 11:47:35.017688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:36 crc kubenswrapper[4958]: E1006 11:47:36.976340 4958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.078017 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.078312 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.080804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.080865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.080888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.908239 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.908887 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.910756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.910859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:37 crc kubenswrapper[4958]: I1006 11:47:37.910886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:38 crc kubenswrapper[4958]: I1006 11:47:38.889494 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:47:38 crc kubenswrapper[4958]: I1006 11:47:38.889829 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:38 crc kubenswrapper[4958]: I1006 11:47:38.892388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:38 crc kubenswrapper[4958]: I1006 11:47:38.892448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:38 crc kubenswrapper[4958]: I1006 11:47:38.892467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:38 crc kubenswrapper[4958]: I1006 11:47:38.898138 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:47:39 crc kubenswrapper[4958]: I1006 11:47:39.027328 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:39 crc kubenswrapper[4958]: I1006 11:47:39.028820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:39 crc kubenswrapper[4958]: I1006 11:47:39.028885 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:39 crc kubenswrapper[4958]: I1006 11:47:39.028907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:40 crc kubenswrapper[4958]: W1006 11:47:40.781330 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 06 11:47:40 crc kubenswrapper[4958]: I1006 11:47:40.781504 4958 trace.go:236] Trace[1688983334]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:47:30.780) (total time: 10001ms):
Oct 06 11:47:40 crc kubenswrapper[4958]: Trace[1688983334]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:47:40.781)
Oct 06 11:47:40 crc kubenswrapper[4958]: Trace[1688983334]: [10.001201161s] [10.001201161s] END
Oct 06 11:47:40 crc kubenswrapper[4958]: E1006 11:47:40.781545 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 06 11:47:40 crc kubenswrapper[4958]: I1006 11:47:40.838758 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Oct 06 11:47:41 crc kubenswrapper[4958]: W1006 11:47:41.251360 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.251467 4958 trace.go:236] Trace[568318416]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:47:31.250) (total time: 10001ms):
Oct 06 11:47:41 crc kubenswrapper[4958]: Trace[568318416]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:47:41.251)
Oct 06 11:47:41 crc kubenswrapper[4958]: Trace[568318416]: [10.001017625s] [10.001017625s] END
Oct 06 11:47:41 crc kubenswrapper[4958]: E1006 11:47:41.251494 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.305411 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.305779 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.308004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.308088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.308110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.369697 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.765278 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.765342 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.769656 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.769705 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.889782 4958 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 11:47:41 crc kubenswrapper[4958]: I1006 11:47:41.889868 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.038187 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.039595 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09" exitCode=255
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.039668 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09"}
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.039739 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.039901 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.040796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.040888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.040903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.041589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.041629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.041648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.042476 4958 scope.go:117] "RemoveContainer" containerID="da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09"
Oct 06 11:47:42 crc kubenswrapper[4958]: I1006 11:47:42.061645 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.044060 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.045776 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d"}
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.045888 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.045906 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.046801 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.046828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.046840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.046945 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.047002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:43 crc kubenswrapper[4958]: I1006 11:47:43.047023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.713618 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.713897 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.714007 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.715832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.715896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.715921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:44 crc kubenswrapper[4958]: I1006 11:47:44.721250 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:47:45 crc kubenswrapper[4958]: I1006 11:47:45.053853 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:45 crc kubenswrapper[4958]: I1006 11:47:45.055721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:45 crc kubenswrapper[4958]: I1006 11:47:45.055787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:45 crc kubenswrapper[4958]: I1006 11:47:45.055799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:45 crc kubenswrapper[4958]: I1006 11:47:45.273306 4958 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.056722 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.058312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.058379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.058395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.277373 4958 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.745450 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.750545 4958 trace.go:236] Trace[1325914346]: "Reflector ListAndWatch"
name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:47:36.307) (total time: 10442ms): Oct 06 11:47:46 crc kubenswrapper[4958]: Trace[1325914346]: ---"Objects listed" error: 10442ms (11:47:46.750) Oct 06 11:47:46 crc kubenswrapper[4958]: Trace[1325914346]: [10.442788215s] [10.442788215s] END Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.750596 4958 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.750552 4958 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.751112 4958 trace.go:236] Trace[1110170492]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:47:34.344) (total time: 12406ms): Oct 06 11:47:46 crc kubenswrapper[4958]: Trace[1110170492]: ---"Objects listed" error: 12406ms (11:47:46.750) Oct 06 11:47:46 crc kubenswrapper[4958]: Trace[1110170492]: [12.406842504s] [12.406842504s] END Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.751194 4958 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.751118 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.828825 4958 apiserver.go:52] "Watching apiserver" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.832811 4958 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.833314 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.833899 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.834003 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.834140 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.834326 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.835496 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.835726 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.839012 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.839126 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.842507 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.843657 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.845352 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.845819 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.845947 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.846080 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" 
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.846290 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.846391 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.846638 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.847194 4958 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.846792 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851468 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851547 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 
11:47:46.851646 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851694 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851740 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851785 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851832 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851883 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" 
(UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851928 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851963 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.851999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852044 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852092 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852138 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852208 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852248 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852288 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852326 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852360 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 
11:47:46.852398 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852435 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852471 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852511 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852548 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852583 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852622 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852700 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852734 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852771 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852804 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852837 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852871 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852925 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.852947 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853028 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853058 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853085 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853109 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853134 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853178 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853200 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853223 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853242 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853265 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 11:47:46 crc 
kubenswrapper[4958]: I1006 11:47:46.853290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853312 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853380 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853404 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853426 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853449 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853473 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853498 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853521 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853544 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853591 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853617 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853661 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") 
pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853682 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853681 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853824 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc 
kubenswrapper[4958]: I1006 11:47:46.853857 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853882 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853907 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853921 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853941 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853969 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.853994 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854041 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854065 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854088 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854110 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854133 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854199 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854225 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854375 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854392 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854423 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854400 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854467 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854541 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854578 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854618 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854658 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854700 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854737 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854775 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854859 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854896 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854929 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854980 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855018 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855055 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855092 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855195 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855361 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855404 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855480 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " 
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855556 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855596 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855640 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855680 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855716 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855753 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855798 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855836 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855869 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855903 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 
11:47:46.855941 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855977 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856043 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856078 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856119 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856182 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856220 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856256 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856347 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856385 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:47:46 crc kubenswrapper[4958]: 
I1006 11:47:46.856423 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856461 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856499 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856535 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856571 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856649 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856692 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856725 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856759 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856794 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856831 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856874 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856910 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856947 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856987 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857032 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857068 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857103 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857138 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857192 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857226 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857261 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857295 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857396 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857452 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857491 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857529 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857565 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857600 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857638 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857672 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857712 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857792 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857832 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857871 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 
06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857908 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857943 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857979 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858016 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858053 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858129 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858531 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858573 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858612 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858649 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858683 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858716 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858818 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858855 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858892 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858929 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858968 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859011 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859055 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859090 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859125 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859213 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859251 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859321 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859357 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859499 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859540 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 
11:47:46.859576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859662 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859752 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859790 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859833 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859870 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859908 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859939 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859978 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860109 4958 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860133 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860176 4958 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860197 4958 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860218 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860236 4958 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.860258 4958 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854702 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854938 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.854971 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855030 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855121 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855190 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855490 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.855686 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856084 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856075 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856495 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.856797 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857031 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857483 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.857550 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858090 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858298 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.858573 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.859102 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870192 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.861542 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.861973 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.862444 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.863349 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.863831 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.864174 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.864494 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.864870 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.865281 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.865261 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.865369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.865671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.866059 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.866718 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.866874 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.867817 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.868026 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.868590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.868829 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.869293 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.869751 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.869878 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.869931 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:47.369890718 +0000 UTC m=+21.255916036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870448 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870690 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870857 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.870966 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.871221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.871458 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.871565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.871772 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.879441 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.881884 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.882121 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.882359 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.882752 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.884151 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.884642 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.885330 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.885579 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.880841 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.887792 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.887824 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.887843 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.887937 4958 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:47.387901897 +0000 UTC m=+21.273927355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.889818 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.889988 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.890090 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.890368 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.890763 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.891285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.891283 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.891694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.891813 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.892018 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.892565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.892930 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.892935 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.893287 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.893651 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.893790 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.893817 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.894269 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.894455 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.894783 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.895544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.869938 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.899969 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.900335 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.900990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.901011 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.901846 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.902134 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.903904 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.906124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.909384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.910116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.910523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.911126 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.911329 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.911671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.911954 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.912069 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.912719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.912793 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.913080 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.915573 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:47.411481804 +0000 UTC m=+21.297507142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.916424 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.916552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.917957 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.918286 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.918703 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.920761 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.920798 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.920819 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.920896 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.921014 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.921372 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.921780 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.921949 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.922315 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.922523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.922798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.916589 4958 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.924755 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.925016 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.928052 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.928458 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.928840 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.929086 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.930094 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.930111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.930365 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.930350 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.930812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.930920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.931204 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.931211 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.931809 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.931845 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.931865 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932001 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932225 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932275 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932298 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932777 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932799 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932849 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.932869 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.932998 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:47.432970977 +0000 UTC m=+21.318996285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.933108 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: E1006 11:47:46.933189 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:47.433181401 +0000 UTC m=+21.319206709 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.937436 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.937500 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.937636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.937759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.938918 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.939630 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.939853 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.941216 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.942442 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.944199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.947012 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.947254 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.947293 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.947366 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.947635 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.947520 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.949775 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.951869 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.952597 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.953620 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.953796 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.953923 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.954619 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.955526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.955741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.956295 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.956720 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.959111 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.960452 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.966257 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.967067 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.967357 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.968104 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.968615 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.960166 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.969617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.970910 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.971516 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.973677 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.974181 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.974502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.974654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.974700 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.974684 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.975272 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.975416 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.975565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.975581 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976063 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976064 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976125 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976300 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976319 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976332 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976350 4958 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976360 4958 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976372 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976385 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976396 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976406 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath 
\"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976443 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976470 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976491 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976507 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976688 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976811 4958 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.976892 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 06 
11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977176 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977259 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977279 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977325 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977337 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977349 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977359 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977392 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977402 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977413 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977425 4958 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977436 4958 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977445 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977474 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node 
\"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977486 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977496 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977508 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977517 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977547 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977559 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977568 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977580 
4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977589 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977598 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977624 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977672 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977702 4958 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977713 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977722 4958 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977735 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977745 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977756 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977784 4958 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977797 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977809 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977837 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977886 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977900 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977913 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977944 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977954 4958 reconciler_common.go:293] "Volume detached 
for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977964 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977976 4958 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977985 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977993 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978022 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978034 4958 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978042 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978051 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978060 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978071 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978096 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978107 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978118 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978128 4958 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 06 
11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978137 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978184 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978197 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978207 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978215 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978224 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978260 4958 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978270 4958 reconciler_common.go:293] 
"Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978281 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978290 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978301 4958 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978328 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978337 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978351 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978360 4958 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978369 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978378 4958 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978408 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978419 4958 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978431 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978442 4958 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978488 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978499 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978511 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978519 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978531 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978542 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978570 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978582 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" 
DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978590 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978599 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.977908 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978647 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978661 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978671 4958 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978684 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978696 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978722 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978735 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978743 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978755 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978764 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978773 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978815 4958 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978831 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978840 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978850 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978861 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978889 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978899 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978908 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.978921 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979001 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979026 4958 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979050 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979068 4958 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979084 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" 
Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979099 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979119 4958 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979134 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979232 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979259 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979279 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979299 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979314 4958 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979328 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979357 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979379 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979397 4958 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979412 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979427 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979447 4958 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979464 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979481 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979499 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979516 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979531 4958 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979544 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979557 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979574 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979589 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979603 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979621 4958 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979634 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979648 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979662 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979680 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979695 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979710 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979726 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979755 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979770 4958 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979801 4958 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 
11:47:46.979819 4958 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979834 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979847 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979862 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979878 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979892 4958 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979905 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979918 4958 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979935 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979947 4958 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979960 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.979974 4958 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.980068 4958 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.980086 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.980101 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on 
node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.980117 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.980134 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.980399 4958 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.982527 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.986508 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:46 crc kubenswrapper[4958]: I1006 11:47:46.986825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.010392 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.010769 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.018665 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.027511 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.031234 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.031727 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.032241 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.032955 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.033265 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.033591 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.034168 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.034983 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.036773 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.036968 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.037634 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.039213 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.040223 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.041181 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.041685 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.042413 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.043313 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.043710 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.047716 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.048409 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.049055 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.049913 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.050554 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.050992 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.054205 4958 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.054306 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.057933 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.059421 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.066499 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.066938 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.073568 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.074778 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.075712 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.076803 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.077471 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.077895 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.080086 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081118 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081536 4958 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081572 4958 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081583 4958 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081593 4958 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081602 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081612 4958 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081620 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081629 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081637 4958 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081646 4958 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081656 4958 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.081714 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.082549 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.083075 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.083936 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.084816 4958 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.085852 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.086316 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.086781 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.086836 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.087821 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.088386 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" 
Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.089233 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.103512 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.127592 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.136785 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.144649 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.155916 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.166718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.180882 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.194637 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:47:47 crc kubenswrapper[4958]: W1006 11:47:47.199323 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8c6c39f9226ec7b00fde3d733d149f19e22d9d900e8ac75e4504a0cae091d70e WatchSource:0}: Error finding container 8c6c39f9226ec7b00fde3d733d149f19e22d9d900e8ac75e4504a0cae091d70e: Status 404 returned error can't find the container with id 8c6c39f9226ec7b00fde3d733d149f19e22d9d900e8ac75e4504a0cae091d70e Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.213963 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.384442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.384744 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:48.384688466 +0000 UTC m=+22.270713784 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.485892 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.485963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.486000 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.486046 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486121 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486183 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486198 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486208 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486266 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486297 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:48.486275257 +0000 UTC m=+22.372300555 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486332 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:48.486310037 +0000 UTC m=+22.372335345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486311 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486359 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486365 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:48.486344358 +0000 UTC m=+22.372369666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486375 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:47 crc kubenswrapper[4958]: E1006 11:47:47.486466 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:48.48644251 +0000 UTC m=+22.372467828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.546470 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8xzjs"] Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.547062 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.549256 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.550398 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.550838 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.561264 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.574020 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.581713 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.587023 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2sh\" (UniqueName: \"kubernetes.io/projected/c3545ace-6477-4c0b-9576-f32c6748e0a0-kube-api-access-lz2sh\") pod \"node-resolver-8xzjs\" (UID: \"c3545ace-6477-4c0b-9576-f32c6748e0a0\") " pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.587069 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c3545ace-6477-4c0b-9576-f32c6748e0a0-hosts-file\") pod 
\"node-resolver-8xzjs\" (UID: \"c3545ace-6477-4c0b-9576-f32c6748e0a0\") " pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.606869 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.620449 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.633677 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.646549 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.688132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c3545ace-6477-4c0b-9576-f32c6748e0a0-hosts-file\") pod \"node-resolver-8xzjs\" (UID: \"c3545ace-6477-4c0b-9576-f32c6748e0a0\") " pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.688215 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2sh\" (UniqueName: \"kubernetes.io/projected/c3545ace-6477-4c0b-9576-f32c6748e0a0-kube-api-access-lz2sh\") pod \"node-resolver-8xzjs\" (UID: \"c3545ace-6477-4c0b-9576-f32c6748e0a0\") " pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.688663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/c3545ace-6477-4c0b-9576-f32c6748e0a0-hosts-file\") pod \"node-resolver-8xzjs\" (UID: \"c3545ace-6477-4c0b-9576-f32c6748e0a0\") " pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.723822 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2sh\" (UniqueName: \"kubernetes.io/projected/c3545ace-6477-4c0b-9576-f32c6748e0a0-kube-api-access-lz2sh\") pod \"node-resolver-8xzjs\" (UID: \"c3545ace-6477-4c0b-9576-f32c6748e0a0\") " pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.861346 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8xzjs" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.913074 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lwknw"] Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.913647 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.916639 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.916694 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-whw6z"] Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.917088 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4w4h5"] Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.917343 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.917396 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.917628 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.917693 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.917754 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.918770 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.919508 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.919657 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.919883 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.920001 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.920121 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.920437 4958 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.920601 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.932547 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.949201 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.977705 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-cni-binary-copy\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991516 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-k8s-cni-cncf-io\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991541 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-hostroot\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " 
pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991570 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-system-cni-dir\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-netns\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991609 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-daemon-config\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991626 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wghx2\" (UniqueName: \"kubernetes.io/projected/5e46de0e-9f67-4dae-8601-65004d0d71c1-kube-api-access-wghx2\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-cni-bin\") pod \"multus-4w4h5\" 
(UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991663 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e46de0e-9f67-4dae-8601-65004d0d71c1-cni-binary-copy\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991679 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-etc-kubernetes\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991695 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991711 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-proxy-tls\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991726 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-cnibin\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-kubelet\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991759 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e46de0e-9f67-4dae-8601-65004d0d71c1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-mcd-auth-proxy-config\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991796 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-system-cni-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991812 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-cni-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991830 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwd9v\" (UniqueName: \"kubernetes.io/projected/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-kube-api-access-bwd9v\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-cni-multus\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991862 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-os-release\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-multus-certs\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991909 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-os-release\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991927 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncxs\" (UniqueName: \"kubernetes.io/projected/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-kube-api-access-2ncxs\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991942 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-cnibin\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-socket-dir-parent\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.991989 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-conf-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.992009 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-rootfs\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:47 crc kubenswrapper[4958]: I1006 11:47:47.996584 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.015779 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.039562 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.049029 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.067464 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.070811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c23a9e2ae81bd1c50fe37a384aa7f6b1e0333bc0d49e14e51d16889ead52ad1e"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.072690 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.072718 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.072728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e64149b7a36bc1b65614994e4c5ea8de9375b07e63238ad70de69216ac48074a"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.074578 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8xzjs" event={"ID":"c3545ace-6477-4c0b-9576-f32c6748e0a0","Type":"ContainerStarted","Data":"f301f93985bd76d21838b4d9301f995979eef6f745412118deb7610623231349"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.077257 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.077285 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8c6c39f9226ec7b00fde3d733d149f19e22d9d900e8ac75e4504a0cae091d70e"} Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.092375 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ncxs\" (UniqueName: \"kubernetes.io/projected/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-kube-api-access-2ncxs\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093207 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-os-release\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093229 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-conf-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-cnibin\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-socket-dir-parent\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093289 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-rootfs\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093307 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-cni-binary-copy\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093322 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-k8s-cni-cncf-io\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093337 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-netns\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093352 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-hostroot\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-system-cni-dir\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093384 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-conf-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093399 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-daemon-config\") pod \"multus-4w4h5\" (UID: 
\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093492 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wghx2\" (UniqueName: \"kubernetes.io/projected/5e46de0e-9f67-4dae-8601-65004d0d71c1-kube-api-access-wghx2\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-cni-bin\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093542 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e46de0e-9f67-4dae-8601-65004d0d71c1-cni-binary-copy\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-etc-kubernetes\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093581 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lwknw\" (UID: 
\"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-os-release\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-proxy-tls\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093644 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-system-cni-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093665 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-cnibin\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-kubelet\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093688 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-k8s-cni-cncf-io\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e46de0e-9f67-4dae-8601-65004d0d71c1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093720 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-hostroot\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-mcd-auth-proxy-config\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-cni-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093760 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bwd9v\" (UniqueName: \"kubernetes.io/projected/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-kube-api-access-bwd9v\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093775 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-cni-multus\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093792 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-os-release\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093808 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-multus-certs\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093816 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-cnibin\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093846 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-system-cni-dir\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-netns\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-rootfs\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093905 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-kubelet\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094007 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-os-release\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.093793 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-socket-dir-parent\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094046 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-run-multus-certs\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094053 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-daemon-config\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094326 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-cni-binary-copy\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-cni-multus\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094407 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-host-var-lib-cni-bin\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " 
pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094479 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e46de0e-9f67-4dae-8601-65004d0d71c1-cni-binary-copy\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-cnibin\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e46de0e-9f67-4dae-8601-65004d0d71c1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-etc-kubernetes\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094659 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-system-cni-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094742 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-multus-cni-dir\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.094982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-mcd-auth-proxy-config\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.096082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e46de0e-9f67-4dae-8601-65004d0d71c1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.097458 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-proxy-tls\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.114333 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.120304 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wghx2\" (UniqueName: \"kubernetes.io/projected/5e46de0e-9f67-4dae-8601-65004d0d71c1-kube-api-access-wghx2\") pod \"multus-additional-cni-plugins-lwknw\" (UID: \"5e46de0e-9f67-4dae-8601-65004d0d71c1\") " pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.123725 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ncxs\" (UniqueName: \"kubernetes.io/projected/8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7-kube-api-access-2ncxs\") pod \"multus-4w4h5\" (UID: \"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\") " pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.123763 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwd9v\" (UniqueName: 
\"kubernetes.io/projected/1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b-kube-api-access-bwd9v\") pod \"machine-config-daemon-whw6z\" (UID: \"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\") " pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.135322 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.158177 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.172788 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.184190 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.192485 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.201729 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.213567 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.220324 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.231576 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.242441 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lwknw" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.242452 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.254044 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc06
4167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: W1006 11:47:48.255233 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e46de0e_9f67_4dae_8601_65004d0d71c1.slice/crio-9901059d3053d9b8c9bb8ec858f8b87fb1ab0f6a96486365fcee186cd529acf6 WatchSource:0}: Error finding container 9901059d3053d9b8c9bb8ec858f8b87fb1ab0f6a96486365fcee186cd529acf6: Status 404 returned error can't find the container with id 9901059d3053d9b8c9bb8ec858f8b87fb1ab0f6a96486365fcee186cd529acf6 Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.268856 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.275109 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4w4h5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.282226 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.283979 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: W1006 11:47:48.298733 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8198c8fd_8ec9_4a56_9e87_4e3967fb8ec7.slice/crio-d67195ac31ed3598395962976f4bef7d85dfe0d76f8ead56754fdee05053cfe9 WatchSource:0}: Error finding container d67195ac31ed3598395962976f4bef7d85dfe0d76f8ead56754fdee05053cfe9: Status 404 returned error can't find the container with id d67195ac31ed3598395962976f4bef7d85dfe0d76f8ead56754fdee05053cfe9 Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.300518 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: W1006 11:47:48.308110 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a63a5e9_6d00_40ff_a080_cfcaf03a1c1b.slice/crio-5b029de885ac32e83ee1c2d2af4d1ddc842bbcaf53a4b6219b0829f0c592f04c WatchSource:0}: Error finding container 5b029de885ac32e83ee1c2d2af4d1ddc842bbcaf53a4b6219b0829f0c592f04c: Status 404 returned error can't find the container with id 5b029de885ac32e83ee1c2d2af4d1ddc842bbcaf53a4b6219b0829f0c592f04c Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.317380 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.319204 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntrlk"] Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.323824 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.327505 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.327746 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.332372 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.332635 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.332725 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.332851 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.332906 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.332739 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.342043 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.354038 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.368587 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.384406 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.394497 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.395726 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.395835 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-ovn\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.395860 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/cd589959-144a-41bd-b6d5-a872e5c25cee-ovn-node-metrics-cert\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.395937 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:50.395901114 +0000 UTC m=+24.281926412 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.395999 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-log-socket\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396049 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396066 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-config\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396085 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-slash\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396108 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-etc-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396161 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-systemd-units\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-netns\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396199 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-env-overrides\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-node-log\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-kubelet\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-script-lib\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396454 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-var-lib-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396499 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-ovn-kubernetes\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpjm\" (UniqueName: \"kubernetes.io/projected/cd589959-144a-41bd-b6d5-a872e5c25cee-kube-api-access-vzpjm\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396582 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-bin\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396601 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-netd\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396622 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.396644 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-systemd\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.404499 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.416574 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.428613 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.441817 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.452854 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.461357 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.478328 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.491695 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.497923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-node-log\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.497968 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-kubelet\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.497987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-script-lib\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498007 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpjm\" (UniqueName: \"kubernetes.io/projected/cd589959-144a-41bd-b6d5-a872e5c25cee-kube-api-access-vzpjm\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498034 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498058 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-var-lib-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498099 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-ovn-kubernetes\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498108 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-kubelet\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498121 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-bin\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498185 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-bin\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-netd\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498233 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-var-lib-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498258 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-ovn-kubernetes\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498268 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-systemd\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498300 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498331 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498346 4958 projected.go:194] Error preparing 
data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498401 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498418 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498429 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498474 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-node-log\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498500 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-netd\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498305 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498536 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498570 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498401 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:50.498381314 +0000 UTC m=+24.384406702 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-systemd\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498608 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:50.498589158 +0000 UTC m=+24.384614536 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498632 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498656 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-ovn\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd589959-144a-41bd-b6d5-a872e5c25cee-ovn-node-metrics-cert\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498676 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498691 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:50.49868275 +0000 UTC m=+24.384708058 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498710 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.498714 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:50.49870789 +0000 UTC m=+24.384733198 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498734 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-ovn\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498768 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-log-socket\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498789 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-config\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-slash\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498814 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-log-socket\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-etc-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498840 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-systemd-units\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498857 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-netns\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.498872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-env-overrides\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499089 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-script-lib\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-etc-openvswitch\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-slash\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-systemd-units\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499280 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-netns\") pod \"ovnkube-node-ntrlk\" (UID: 
\"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-env-overrides\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.499628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-config\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.502847 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd589959-144a-41bd-b6d5-a872e5c25cee-ovn-node-metrics-cert\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.514680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpjm\" (UniqueName: \"kubernetes.io/projected/cd589959-144a-41bd-b6d5-a872e5c25cee-kube-api-access-vzpjm\") pod \"ovnkube-node-ntrlk\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.659264 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:48 crc kubenswrapper[4958]: W1006 11:47:48.673472 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd589959_144a_41bd_b6d5_a872e5c25cee.slice/crio-9d9ed4bdaca68aafc466a717a228d9c59567fe0afcb34a0474de16fd324d1ee7 WatchSource:0}: Error finding container 9d9ed4bdaca68aafc466a717a228d9c59567fe0afcb34a0474de16fd324d1ee7: Status 404 returned error can't find the container with id 9d9ed4bdaca68aafc466a717a228d9c59567fe0afcb34a0474de16fd324d1ee7 Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.908989 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.912391 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.912735 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.912472 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.913009 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.912424 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:48 crc kubenswrapper[4958]: E1006 11:47:48.913284 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.916215 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.917006 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.917572 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.918113 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.923544 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.927517 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.936369 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.956743 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.967997 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:48 crc kubenswrapper[4958]: I1006 11:47:48.983446 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.001981 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.013017 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.023048 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.037454 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.052413 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.066062 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.078393 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.081561 4958 generic.go:334] "Generic (PLEG): container finished" podID="5e46de0e-9f67-4dae-8601-65004d0d71c1" containerID="244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe" exitCode=0 Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.081664 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerDied","Data":"244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.081735 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerStarted","Data":"9901059d3053d9b8c9bb8ec858f8b87fb1ab0f6a96486365fcee186cd529acf6"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.083666 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8xzjs" event={"ID":"c3545ace-6477-4c0b-9576-f32c6748e0a0","Type":"ContainerStarted","Data":"d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.084933 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" exitCode=0 Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.084994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.085018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"9d9ed4bdaca68aafc466a717a228d9c59567fe0afcb34a0474de16fd324d1ee7"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.088048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.088084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.088100 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"5b029de885ac32e83ee1c2d2af4d1ddc842bbcaf53a4b6219b0829f0c592f04c"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.089427 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerStarted","Data":"979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.089475 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerStarted","Data":"d67195ac31ed3598395962976f4bef7d85dfe0d76f8ead56754fdee05053cfe9"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.091372 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.091804 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.093897 4958 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d" exitCode=255 Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.093987 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d"} Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.094111 4958 scope.go:117] "RemoveContainer" containerID="da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.096345 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.110491 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.111784 4958 scope.go:117] "RemoveContainer" containerID="fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d" Oct 06 11:47:49 crc kubenswrapper[4958]: E1006 11:47:49.112113 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.112709 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.129283 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.145338 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.159973 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.202586 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.219998 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.236234 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.265279 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.278631 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.294465 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.327012 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:41Z\\\",\\\"message\\\":\\\"W1006 11:47:30.119228 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1006 11:47:30.119674 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759751250 cert, and key in /tmp/serving-cert-867757703/serving-signer.crt, 
/tmp/serving-cert-867757703/serving-signer.key\\\\nI1006 11:47:30.463817 1 observer_polling.go:159] Starting file observer\\\\nW1006 11:47:30.466700 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1006 11:47:30.466927 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:30.469319 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-867757703/tls.crt::/tmp/serving-cert-867757703/tls.key\\\\\\\"\\\\nF1006 11:47:41.498950 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.340102 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.354264 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.379566 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.395589 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.410975 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.422936 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.438018 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.455319 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.469099 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.471014 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vns7w"] Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.471507 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.473794 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.490941 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.510772 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.532354 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.560743 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.600335 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.611397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8htn\" (UniqueName: \"kubernetes.io/projected/f33eae78-7219-4551-bea7-dcfaf22c4e62-kube-api-access-f8htn\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.611437 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f33eae78-7219-4551-bea7-dcfaf22c4e62-serviceca\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.611522 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33eae78-7219-4551-bea7-dcfaf22c4e62-host\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.637585 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.686281 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3746265bc4e00200b79550ea0cdf306905cd84aafae2759ac3b57d68f44a09\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:41Z\\\",\\\"message\\\":\\\"W1006 11:47:30.119228 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1006 11:47:30.119674 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759751250 cert, and key in /tmp/serving-cert-867757703/serving-signer.crt, /tmp/serving-cert-867757703/serving-signer.key\\\\nI1006 11:47:30.463817 1 observer_polling.go:159] Starting file observer\\\\nW1006 11:47:30.466700 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1006 11:47:30.466927 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:30.469319 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-867757703/tls.crt::/tmp/serving-cert-867757703/tls.key\\\\\\\"\\\\nF1006 11:47:41.498950 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.712209 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8htn\" (UniqueName: \"kubernetes.io/projected/f33eae78-7219-4551-bea7-dcfaf22c4e62-kube-api-access-f8htn\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.712263 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f33eae78-7219-4551-bea7-dcfaf22c4e62-serviceca\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.712336 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/f33eae78-7219-4551-bea7-dcfaf22c4e62-host\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.712416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33eae78-7219-4551-bea7-dcfaf22c4e62-host\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.713235 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f33eae78-7219-4551-bea7-dcfaf22c4e62-serviceca\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.726155 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.748787 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8htn\" (UniqueName: \"kubernetes.io/projected/f33eae78-7219-4551-bea7-dcfaf22c4e62-kube-api-access-f8htn\") pod \"node-ca-vns7w\" (UID: \"f33eae78-7219-4551-bea7-dcfaf22c4e62\") " pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.782283 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.804743 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vns7w" Oct 06 11:47:49 crc kubenswrapper[4958]: W1006 11:47:49.816320 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf33eae78_7219_4551_bea7_dcfaf22c4e62.slice/crio-4624fd2b24309c91a4749de745f2b4dd3014347ca2327859256778992b46ca2f WatchSource:0}: Error finding container 4624fd2b24309c91a4749de745f2b4dd3014347ca2327859256778992b46ca2f: Status 404 returned error can't find the container with id 4624fd2b24309c91a4749de745f2b4dd3014347ca2327859256778992b46ca2f Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.831789 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.885381 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.907559 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 
11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.941707 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:49 crc kubenswrapper[4958]: I1006 11:47:49.981519 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.018883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.060392 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.098578 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vns7w" event={"ID":"f33eae78-7219-4551-bea7-dcfaf22c4e62","Type":"ContainerStarted","Data":"4624fd2b24309c91a4749de745f2b4dd3014347ca2327859256778992b46ca2f"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103433 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" 
event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103468 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103479 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103488 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.103771 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.105822 4958 generic.go:334] "Generic (PLEG): container finished" podID="5e46de0e-9f67-4dae-8601-65004d0d71c1" containerID="5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a" exitCode=0 Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.105889 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerDied","Data":"5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.108554 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.112949 4958 scope.go:117] "RemoveContainer" containerID="fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.113099 4958 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.115112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2"} Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.140065 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.179899 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.219087 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.256474 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.304134 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.339276 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 
11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.381010 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.418737 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.418993 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:54.418952435 +0000 UTC m=+28.304977743 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.423217 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.460274 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.501445 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.519555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.519759 4958 configmap.go:193] Couldn't 
get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.519797 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.519900 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.519920 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:54.519889132 +0000 UTC m=+28.405914480 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.519928 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.519942 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.519995 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:54.519972294 +0000 UTC m=+28.405997662 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.519984 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.520076 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.520096 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.520110 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.520120 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.520165 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:54.520139167 +0000 UTC m=+28.406164475 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.520210 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.520243 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:54.520234729 +0000 UTC m=+28.406260038 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.547042 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689a
d307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.580777 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.618557 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.659719 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.707074 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 
11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.741441 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.786194 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.912842 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.912842 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.913086 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.913195 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:50 crc kubenswrapper[4958]: I1006 11:47:50.912912 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:50 crc kubenswrapper[4958]: E1006 11:47:50.913763 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.120585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vns7w" event={"ID":"f33eae78-7219-4551-bea7-dcfaf22c4e62","Type":"ContainerStarted","Data":"7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c"} Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.123329 4958 generic.go:334] "Generic (PLEG): container finished" podID="5e46de0e-9f67-4dae-8601-65004d0d71c1" containerID="99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0" exitCode=0 Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.123353 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerDied","Data":"99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0"} Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.138791 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.166446 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.182890 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.199874 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.218762 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.234559 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.251987 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.264545 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.275902 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.287360 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.302271 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.313429 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.323548 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.341234 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.377852 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.417237 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.461652 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.500599 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.544862 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.584902 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.629262 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.660909 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.717168 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.747050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab03487
7ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.799652 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.820889 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.859590 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:51 crc kubenswrapper[4958]: I1006 11:47:51.901027 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:51Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.131036 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 
11:47:52.133612 4958 generic.go:334] "Generic (PLEG): container finished" podID="5e46de0e-9f67-4dae-8601-65004d0d71c1" containerID="4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198" exitCode=0 Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.133708 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerDied","Data":"4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198"} Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.148690 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.163538 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.174587 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.186013 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.200009 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c
0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.214749 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.226573 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.241218 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.264342 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.306585 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 
11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.340846 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.383283 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.421378 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.462114 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.912414 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.912416 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:52 crc kubenswrapper[4958]: E1006 11:47:52.913064 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:52 crc kubenswrapper[4958]: I1006 11:47:52.912448 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:52 crc kubenswrapper[4958]: E1006 11:47:52.913185 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:52 crc kubenswrapper[4958]: E1006 11:47:52.913377 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.146564 4958 generic.go:334] "Generic (PLEG): container finished" podID="5e46de0e-9f67-4dae-8601-65004d0d71c1" containerID="41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd" exitCode=0 Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.146641 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerDied","Data":"41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.152011 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.154102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.154161 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.154173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.154295 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.160794 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.162611 4958 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.162902 4958 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.164776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.164855 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.164880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.164912 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.164936 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: E1006 11:47:53.183748 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.187625 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.190357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.190393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.190406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.190426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.190437 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.204705 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: E1006 11:47:53.206922 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.212331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.212458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.212527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.212587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.212648 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.223533 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: E1006 11:47:53.225777 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.229685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.229752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.229799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.229832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.229854 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.236913 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:
47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: E1006 11:47:53.241881 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.245688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.245744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.245753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.245771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.245785 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.250828 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: E1006 11:47:53.256460 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: E1006 11:47:53.256626 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.258701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.258734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.258747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.258766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.258778 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.265597 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.277174 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.289335 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.300050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.315604 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.329082 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.341568 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.355534 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.364973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.365026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.365040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.365069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.365082 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.467681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.467731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.467743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.467762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.467774 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.571780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.571825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.571840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.571859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.571872 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.674263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.674312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.674325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.674342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.674353 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.777682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.777749 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.777766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.777811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.777831 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.881682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.881751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.881771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.881798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.881816 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.985342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.985400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.985422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.985447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:53 crc kubenswrapper[4958]: I1006 11:47:53.985465 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:53Z","lastTransitionTime":"2025-10-06T11:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.089878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.089946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.089963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.089991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.090008 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.159000 4958 generic.go:334] "Generic (PLEG): container finished" podID="5e46de0e-9f67-4dae-8601-65004d0d71c1" containerID="3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4" exitCode=0 Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.159125 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerDied","Data":"3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.185782 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.193214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.193290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.193310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.193336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.193353 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.205639 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.228738 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.249933 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.269596 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.282237 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.296845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.296893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.296908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.296933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.296949 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.299368 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8f
aa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.314761 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.326112 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.343398 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.358776 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.372792 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.387416 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.399686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.399742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.399761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.399786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.399802 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.413019 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:54Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.466392 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.466566 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:48:02.466537868 +0000 UTC m=+36.352563176 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.504030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.504420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.504442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.504471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.504486 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.568376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.568441 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.568500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.568557 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568623 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568665 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568682 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568741 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568767 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568773 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568773 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568788 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:54 crc 
kubenswrapper[4958]: E1006 11:47:54.568800 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:02.568776992 +0000 UTC m=+36.454802310 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.568984 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:02.568957126 +0000 UTC m=+36.454982444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.569005 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:02.568997697 +0000 UTC m=+36.455023015 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.569023 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:02.569016027 +0000 UTC m=+36.455041345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.608238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.608293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.608307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.608326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.608338 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.711564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.711639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.711668 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.711701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.711731 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.814980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.815057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.815077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.815107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.815129 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.913031 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.913220 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.913409 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.913735 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.913865 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:54 crc kubenswrapper[4958]: E1006 11:47:54.914022 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.918053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.918102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.918125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.918188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:54 crc kubenswrapper[4958]: I1006 11:47:54.918208 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:54Z","lastTransitionTime":"2025-10-06T11:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.021563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.021633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.021652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.021676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.021693 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.137084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.137138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.137167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.137393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.137413 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.178271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.178794 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.188476 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" event={"ID":"5e46de0e-9f67-4dae-8601-65004d0d71c1","Type":"ContainerStarted","Data":"6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.201097 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.219347 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.225749 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8
909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.240172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.240248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.240269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.240300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.240320 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.243106 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.256264 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.287100 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.308210 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:
47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.329774 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.344376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.344772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.345010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.345295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.345515 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.355520 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.375297 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.394639 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.415295 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.435307 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.450028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.450109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.450132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc 
kubenswrapper[4958]: I1006 11:47:55.450207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.450233 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.453747 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.471955 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.496395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.521820 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.541000 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.553058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.553101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.553113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.553132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.553173 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.561004 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.596664 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.618699 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.636871 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.655929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.655968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.655978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.655998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.656012 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.657720 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.674422 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.685383 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.709406 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.725302 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.739526 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.751685 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.758418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.758465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.758483 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.758505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.758522 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.861515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.861600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.861628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.861662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.861685 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.965298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.965345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.965354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.965372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:55 crc kubenswrapper[4958]: I1006 11:47:55.965383 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:55Z","lastTransitionTime":"2025-10-06T11:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.068413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.068467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.068477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.068500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.068514 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.171938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.171995 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.172007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.172029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.172042 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.192501 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.192981 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.264628 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.275917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.275976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.275996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.276021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.276040 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.285604 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.317182 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.335830 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.355785 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.375781 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.379381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.379443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.379463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.379491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.379513 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.402419 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.418474 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.437093 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.458184 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.473446 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.482996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.483055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.483073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.483099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.483118 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.489406 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.514817 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.537473 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.556788 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.585627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.585694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.585709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.585732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.585745 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.689992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.690084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.690106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.690136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.690199 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.792624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.792688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.792706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.792738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.792755 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.895501 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.895548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.895561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.895578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.895592 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.913024 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.913102 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:56 crc kubenswrapper[4958]: E1006 11:47:56.913185 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.913208 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:56 crc kubenswrapper[4958]: E1006 11:47:56.913360 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:56 crc kubenswrapper[4958]: E1006 11:47:56.913480 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.932198 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.955627 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.974028 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.997010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.997048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.997059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.997076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:56 crc kubenswrapper[4958]: I1006 11:47:56.997087 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:56Z","lastTransitionTime":"2025-10-06T11:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.003618 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.022817 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.037963 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.055466 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.065268 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.075810 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.088014 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.099620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.099651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.099660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc 
kubenswrapper[4958]: I1006 11:47:57.099673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.099683 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.099910 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.112229 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 
11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.125382 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.138764 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.196663 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.202830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.202997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.203078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.203180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.203274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.315865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.316265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.316279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.316298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.316313 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.338616 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.339865 4958 scope.go:117] "RemoveContainer" containerID="fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d" Oct 06 11:47:57 crc kubenswrapper[4958]: E1006 11:47:57.340120 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.422297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.422372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.422395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.422423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.422442 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.527039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.527120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.527139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.527201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.527219 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.630735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.630800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.630818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.630846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.630867 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.734948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.735454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.735525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.735764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.735807 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.838735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.838776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.838785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.838799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.838809 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.942268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.942311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.942319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.942334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:57 crc kubenswrapper[4958]: I1006 11:47:57.942344 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:57Z","lastTransitionTime":"2025-10-06T11:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.044955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.044991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.045000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.045013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.045022 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.148262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.148322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.148335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.148354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.148373 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.202585 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/0.log" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.206441 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431" exitCode=1 Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.206506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.208430 4958 scope.go:117] "RemoveContainer" containerID="61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.223070 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.242418 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.253450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.253510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc 
kubenswrapper[4958]: I1006 11:47:58.253527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.253553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.253570 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.266104 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.287547 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.311641 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: 
I1006 11:47:58.335356 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.353428 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.355970 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.356027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.356042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.356064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.356082 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.368547 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.385486 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.401434 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c
0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.418785 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.433118 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.458504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.458562 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.458579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.458603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.458623 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.461882 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:57Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406413 6277 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:47:57.406438 6277 handler.go:208] Removed *v1.Node event 
handler 2\\\\nI1006 11:47:57.406535 6277 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:47:57.406619 6277 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406828 6277 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.407077 6277 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1006 11:47:57.407391 6277 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:47:57.407452 6277 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.479447 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:58Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.560999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 
11:47:58.561052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.561070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.561095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.561113 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.664296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.664361 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.664381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.664405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.664424 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.767880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.767956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.767980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.768016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.768039 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.871043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.871097 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.871110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.871129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.871161 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.913039 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.913109 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.913170 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:58 crc kubenswrapper[4958]: E1006 11:47:58.913303 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:58 crc kubenswrapper[4958]: E1006 11:47:58.913408 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:58 crc kubenswrapper[4958]: E1006 11:47:58.913524 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.974713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.974763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.974775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.974793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:58 crc kubenswrapper[4958]: I1006 11:47:58.974809 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:58Z","lastTransitionTime":"2025-10-06T11:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.078441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.078511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.078532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.078561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.078581 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.181103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.181212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.181239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.181262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.181279 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.213738 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/0.log" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.218540 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.218737 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.238474 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.263290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:57Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406413 6277 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:47:57.406438 6277 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:47:57.406535 6277 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:47:57.406619 6277 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406828 6277 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.407077 6277 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1006 11:47:57.407391 6277 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:47:57.407452 6277 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.283289 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.284204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.284510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.284794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.285021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.285333 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.310931 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.329018 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.351790 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.375293 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: 
I1006 11:47:59.390794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.390873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.390920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.390958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.390984 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.394813 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8f
aa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.410424 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.424087 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.441010 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.462056 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c
0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.482705 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.494465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.494518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.494536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.494560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.494578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.505765 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:47:59Z is after 2025-08-24T17:21:41Z" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.598567 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.598635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.598653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.598677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.598695 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.701763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.701844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.701867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.701901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.701925 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.805170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.805243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.805262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.805292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.805311 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.908589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.908652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.908670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.908698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:59 crc kubenswrapper[4958]: I1006 11:47:59.908717 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:59Z","lastTransitionTime":"2025-10-06T11:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.010867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.010920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.010946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.010967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.010982 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.077974 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9"] Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.078668 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.081432 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.083216 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.103877 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.114831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: 
I1006 11:48:00.114911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.114932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.114959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.114984 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.135702 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.169511 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.190999 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.203655 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: 
I1006 11:48:00.213367 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.220465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.220492 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.220500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.220513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.220524 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.224722 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/1.log" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.225123 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/0.log" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.227609 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3" exitCode=1 Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.227677 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.227726 4958 scope.go:117] "RemoveContainer" containerID="61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.228357 4958 scope.go:117] "RemoveContainer" containerID="bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3" Oct 06 11:48:00 crc kubenswrapper[4958]: E1006 11:48:00.228616 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.233518 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.235825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddd869cc-2703-4f0d-b694-32bb7735025e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.235865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddd869cc-2703-4f0d-b694-32bb7735025e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.235915 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8vhr\" (UniqueName: \"kubernetes.io/projected/ddd869cc-2703-4f0d-b694-32bb7735025e-kube-api-access-j8vhr\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.235953 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddd869cc-2703-4f0d-b694-32bb7735025e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.246408 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.257103 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.268493 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.280423 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c
0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.292290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.303358 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.315463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.323342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.323395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.323409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.323427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.323440 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.337286 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddd869cc-2703-4f0d-b694-32bb7735025e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.337378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8vhr\" (UniqueName: \"kubernetes.io/projected/ddd869cc-2703-4f0d-b694-32bb7735025e-kube-api-access-j8vhr\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.337403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddd869cc-2703-4f0d-b694-32bb7735025e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.337456 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddd869cc-2703-4f0d-b694-32bb7735025e-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.338441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddd869cc-2703-4f0d-b694-32bb7735025e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.338784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddd869cc-2703-4f0d-b694-32bb7735025e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.339474 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:57Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 
11:47:57.406413 6277 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:47:57.406438 6277 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:47:57.406535 6277 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:47:57.406619 6277 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406828 6277 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.407077 6277 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1006 11:47:57.407391 6277 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:47:57.407452 6277 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvs
witch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.347928 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddd869cc-2703-4f0d-b694-32bb7735025e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.356101 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8vhr\" (UniqueName: \"kubernetes.io/projected/ddd869cc-2703-4f0d-b694-32bb7735025e-kube-api-access-j8vhr\") pod 
\"ovnkube-control-plane-749d76644c-8jbr9\" (UID: \"ddd869cc-2703-4f0d-b694-32bb7735025e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.357379 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.371223 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.386450 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.402943 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.404084 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.421643 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: W1006 11:48:00.424268 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd869cc_2703_4f0d_b694_32bb7735025e.slice/crio-91907736e2e6830130edb0df050dd676f371311a26b69e36d4eb894b37e6c9a4 WatchSource:0}: Error finding container 91907736e2e6830130edb0df050dd676f371311a26b69e36d4eb894b37e6c9a4: Status 404 returned error can't find the container with id 91907736e2e6830130edb0df050dd676f371311a26b69e36d4eb894b37e6c9a4 Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.426701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.426771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.426808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 
11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.426841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.426864 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.445349 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.459938 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.479050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.500501 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.514992 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.530364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.530658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.530800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.530948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.531085 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.539202 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.557226 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.583270 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.597841 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.620626 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b99876556ce7323716f2759c1bcf660a211c42c1daf2a32475f512b97fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:57Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406413 6277 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:47:57.406438 6277 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:47:57.406535 6277 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:47:57.406619 6277 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.406828 6277 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:47:57.407077 6277 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1006 11:47:57.407391 6277 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:47:57.407452 6277 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f
5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.634877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.634920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.634931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.634946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.634957 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.737218 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.737588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.737600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.737623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.737637 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.840327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.840380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.840397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.840421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.840437 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.912616 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.912645 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:00 crc kubenswrapper[4958]: E1006 11:48:00.912796 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:00 crc kubenswrapper[4958]: E1006 11:48:00.912961 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.913300 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:00 crc kubenswrapper[4958]: E1006 11:48:00.913563 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.943557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.943613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.943631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.943654 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:00 crc kubenswrapper[4958]: I1006 11:48:00.943670 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:00Z","lastTransitionTime":"2025-10-06T11:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.046355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.046453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.046476 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.046909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.047206 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.150425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.150482 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.150498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.150528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.150552 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.234484 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/1.log" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.240322 4958 scope.go:117] "RemoveContainer" containerID="bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3" Oct 06 11:48:01 crc kubenswrapper[4958]: E1006 11:48:01.240597 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.241651 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" event={"ID":"ddd869cc-2703-4f0d-b694-32bb7735025e","Type":"ContainerStarted","Data":"5d95fcda3db5aafa2634bf9d4037a6b8b0456bf4be6fc620c742e57e86aed114"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.241721 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" event={"ID":"ddd869cc-2703-4f0d-b694-32bb7735025e","Type":"ContainerStarted","Data":"9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.241743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" event={"ID":"ddd869cc-2703-4f0d-b694-32bb7735025e","Type":"ContainerStarted","Data":"91907736e2e6830130edb0df050dd676f371311a26b69e36d4eb894b37e6c9a4"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.253874 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.253936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.253954 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.253980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.253999 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.261268 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.295580 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.318703 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.347133 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.357048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.357106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.357122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.357172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.357192 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.373382 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z 
is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.392618 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.412132 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.434271 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.453084 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.460383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.460435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.460455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.460480 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.460499 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.469985 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.486211 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0
af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.507591 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.529539 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.548395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.563430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.563502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.563521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.563545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.563575 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.572553 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.598771 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.626068 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.649728 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.666868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.666922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.666939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.666964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.666981 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.673522 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.692600 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.713534 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.736771 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.754136 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.770200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.770270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.770289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.770312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.770329 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.773755 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.795486 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.817347 4958 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.835829 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.857642 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.873193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.873258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.873279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.873307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.873325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.879820 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.911911 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.976821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.976878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.976897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.976926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:01 crc kubenswrapper[4958]: I1006 11:48:01.976946 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:01Z","lastTransitionTime":"2025-10-06T11:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.025685 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4mxw5"] Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.026613 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.026724 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.049202 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc 
kubenswrapper[4958]: I1006 11:48:02.073013 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.079825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.079888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.079906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.079931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.079949 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.096031 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.115440 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.139642 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.158473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc 
kubenswrapper[4958]: I1006 11:48:02.158562 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkh7\" (UniqueName: \"kubernetes.io/projected/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-kube-api-access-cdkh7\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.161768 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.181891 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.182922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc 
kubenswrapper[4958]: I1006 11:48:02.182991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.183013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.183042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.183063 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.198356 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.215774 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.233555 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.253108 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.260042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " 
pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.260226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdkh7\" (UniqueName: \"kubernetes.io/projected/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-kube-api-access-cdkh7\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.260371 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.260527 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:48:02.760487403 +0000 UTC m=+36.646512751 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.274084 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.285740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.285764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.285773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.285786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.285795 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.287289 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.292269 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdkh7\" (UniqueName: \"kubernetes.io/projected/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-kube-api-access-cdkh7\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.307609 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.326986 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.342632 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.390437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.390516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.390535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.390564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.390581 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.493294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.493343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.493357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.493374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.493388 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.563660 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.563985 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:48:18.563959808 +0000 UTC m=+52.449985156 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.597400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.597456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.597474 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.597497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.597515 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.665193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.665439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.665666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.665830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.665552 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.666134 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.666306 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.666521 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:18.666496219 +0000 UTC m=+52.552521567 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.665572 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.666835 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-06 11:48:18.666819326 +0000 UTC m=+52.552844674 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.665834 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.667226 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.665894 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.667686 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:18.667666984 +0000 UTC m=+52.553692332 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.667581 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.667975 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:18.66795897 +0000 UTC m=+52.553984318 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.700511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.700773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.700949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.701140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.701393 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.766743 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.766991 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.767509 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:48:03.767428675 +0000 UTC m=+37.653454023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.804786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.804996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.805131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.805291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.805450 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.908079 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.908195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.908222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.908252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.908274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:02Z","lastTransitionTime":"2025-10-06T11:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.912269 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.912343 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:02 crc kubenswrapper[4958]: I1006 11:48:02.912403 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.912474 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.912629 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:02 crc kubenswrapper[4958]: E1006 11:48:02.912990 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.011701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.011757 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.011774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.011797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.011817 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.114887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.114971 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.114993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.115027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.115050 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.218485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.218549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.218565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.218588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.218605 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.303810 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.305002 4958 scope.go:117] "RemoveContainer" containerID="bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3" Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.305363 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.322334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.322403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.322425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.322451 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.322467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.425572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.425635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.425651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.425674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.425690 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.528666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.528738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.528762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.528793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.528815 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.590262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.590335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.590354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.590380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.590398 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.610685 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.616542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.616627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.616653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.616685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.616708 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.638924 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.644460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.644503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.644514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.644533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.644547 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.664251 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.668660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.668724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.668744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.668772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.668792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.689194 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.693872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.693903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.693913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.693931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.693946 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.713305 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:03Z is after 2025-08-24T17:21:41Z"
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.713530 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.716182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.716241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.716262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.716292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.716311 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.781053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.781305 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.781429 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:48:05.781399452 +0000 UTC m=+39.667424800 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.819443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.819541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.819560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.819583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.819602 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.913060 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:03 crc kubenswrapper[4958]: E1006 11:48:03.913312 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.922298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.922393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.922414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.922438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:03 crc kubenswrapper[4958]: I1006 11:48:03.922455 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:03Z","lastTransitionTime":"2025-10-06T11:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.026082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.026219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.026251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.026340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.026367 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.129367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.129439 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.129464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.129491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.129509 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.232193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.232269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.232283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.232304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.232318 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.335765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.335827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.335846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.335877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.335901 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.438796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.438874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.438891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.438917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.438946 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.542269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.542332 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.542349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.542373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.542391 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.645517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.645576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.645592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.645616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.645635 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.748919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.748991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.749027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.749060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.749084 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.852360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.852418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.852438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.852462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.852481 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.912685 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.912745 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:04 crc kubenswrapper[4958]: E1006 11:48:04.912859 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.912883 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:04 crc kubenswrapper[4958]: E1006 11:48:04.913033 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:04 crc kubenswrapper[4958]: E1006 11:48:04.913186 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.955789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.955852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.955869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.955891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:04 crc kubenswrapper[4958]: I1006 11:48:04.955908 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:04Z","lastTransitionTime":"2025-10-06T11:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.058957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.059022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.059039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.059065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.059083 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.161716 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.161791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.161808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.161833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.161849 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.264571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.264642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.264663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.264690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.264708 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.368534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.368598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.368617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.368646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.368665 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.471487 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.471558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.471580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.471605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.471625 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.574335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.574404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.574424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.574449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.574466 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.678084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.678180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.678199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.678225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.678246 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.780430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.780472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.780484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.780502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.780514 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.805504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:05 crc kubenswrapper[4958]: E1006 11:48:05.805688 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:05 crc kubenswrapper[4958]: E1006 11:48:05.805771 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:48:09.80574629 +0000 UTC m=+43.691771628 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.883883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.883941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.883964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.883993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.884015 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.912762 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:05 crc kubenswrapper[4958]: E1006 11:48:05.912961 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.987679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.987798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.987818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.987848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:05 crc kubenswrapper[4958]: I1006 11:48:05.987866 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:05Z","lastTransitionTime":"2025-10-06T11:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.091498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.091561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.091580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.091606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.091627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.195122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.195199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.195217 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.195243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.195261 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.297718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.297766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.297777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.297795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.297807 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.402376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.402436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.402445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.402461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.402471 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.505365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.505449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.505468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.505499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.505517 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.608610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.608711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.608736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.608804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.608823 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.712014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.712077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.712100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.712127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.712189 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.814816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.814908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.814940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.814972 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.814995 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.913327 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.913409 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:06 crc kubenswrapper[4958]: E1006 11:48:06.913539 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.913695 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:06 crc kubenswrapper[4958]: E1006 11:48:06.913920 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:06 crc kubenswrapper[4958]: E1006 11:48:06.914060 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.918732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.918789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.918820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.918852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.918878 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:06Z","lastTransitionTime":"2025-10-06T11:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.939733 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8f
aa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.960084 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.973437 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:06 crc kubenswrapper[4958]: I1006 11:48:06.993225 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.010096 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c
0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.022618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.022685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.022704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.022730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.022748 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.031089 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.046049 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountP
ath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.062119 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc 
kubenswrapper[4958]: I1006 11:48:07.079421 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.103066 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.118724 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.125407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.125477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.125498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.125529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.125554 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.135094 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.160293 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.182240 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.200432 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: 
I1006 11:48:07.218760 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.228351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.228399 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.228415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.228436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.228448 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.332430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.332968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.332996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.333031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.333058 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.435519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.435593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.435607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.435624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.435648 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.538877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.538941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.538961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.538987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.539007 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.642445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.642748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.642893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.643026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.643182 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.746389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.746448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.746461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.746480 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.746493 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.854394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.854470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.854493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.854515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.854535 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.912533 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:07 crc kubenswrapper[4958]: E1006 11:48:07.912778 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.958356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.958435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.958454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.958479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:07 crc kubenswrapper[4958]: I1006 11:48:07.958497 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:07Z","lastTransitionTime":"2025-10-06T11:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.061668 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.061744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.061764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.061789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.061806 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.165679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.165746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.165772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.165807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.165833 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.268196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.268240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.268251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.268300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.268314 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.371836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.371905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.371922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.371948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.371965 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.475837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.475908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.475929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.475957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.475979 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.579565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.579637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.579655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.579682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.579701 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.683433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.683491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.683507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.683531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.683552 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.786491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.786557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.786577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.786609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.786629 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.890677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.890758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.890778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.890804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.890823 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.912388 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.912508 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.912610 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:08 crc kubenswrapper[4958]: E1006 11:48:08.912596 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:08 crc kubenswrapper[4958]: E1006 11:48:08.912782 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:08 crc kubenswrapper[4958]: E1006 11:48:08.913018 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.993995 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.994049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.994067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.994094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:08 crc kubenswrapper[4958]: I1006 11:48:08.994114 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:08Z","lastTransitionTime":"2025-10-06T11:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.097352 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.097414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.097432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.097461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.097479 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.200595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.200676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.200700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.200733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.200753 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.303306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.303379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.303396 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.303418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.303438 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.406290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.406380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.406411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.406438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.406455 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.510625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.510717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.510735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.511139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.511361 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.614858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.614918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.614936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.614971 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.615016 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.718200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.718284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.718301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.718327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.718345 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.821104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.821195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.821215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.821241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.821259 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.849885 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:09 crc kubenswrapper[4958]: E1006 11:48:09.850071 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:09 crc kubenswrapper[4958]: E1006 11:48:09.850178 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:48:17.850131994 +0000 UTC m=+51.736157312 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.912907 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:09 crc kubenswrapper[4958]: E1006 11:48:09.913139 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.928740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.928800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.928821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.928842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:09 crc kubenswrapper[4958]: I1006 11:48:09.928858 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:09Z","lastTransitionTime":"2025-10-06T11:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.032196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.032292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.032316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.032346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.032376 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.135507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.135563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.135579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.135607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.135643 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.239054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.239187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.239212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.239244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.239268 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.343417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.343491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.343512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.343540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.343559 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.447064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.447126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.447173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.447199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.447244 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.550436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.550483 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.550494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.550513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.550527 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.653547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.653607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.653626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.653649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.653667 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.756321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.756386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.756404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.756436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.756454 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.859820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.859898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.859912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.859934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.859950 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.913377 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.913791 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.914202 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.914518 4958 scope.go:117] "RemoveContainer" containerID="fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d" Oct 06 11:48:10 crc kubenswrapper[4958]: E1006 11:48:10.914547 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:10 crc kubenswrapper[4958]: E1006 11:48:10.915264 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:10 crc kubenswrapper[4958]: E1006 11:48:10.915629 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.963346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.963409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.963435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.963461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:10 crc kubenswrapper[4958]: I1006 11:48:10.963479 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:10Z","lastTransitionTime":"2025-10-06T11:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.066558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.066597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.066606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.066620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.066631 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.169943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.170007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.170022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.170044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.170057 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.272933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.272980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.272989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.273004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.273014 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.277811 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.279886 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.280809 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.295385 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b49
65bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.307346 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.320323 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.330824 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.340349 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc 
kubenswrapper[4958]: I1006 11:48:11.363377 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.374753 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.376047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.376178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.376241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.376330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.376396 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.386240 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.397156 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.414352 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.428250 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.438290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9
cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.447166 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.457383 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.467671 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.479250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.479285 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc 
kubenswrapper[4958]: I1006 11:48:11.479296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.479313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.479326 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.483476 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:11Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.584952 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.585706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.585723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.585745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.585757 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.689293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.689362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.689381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.689406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.689428 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.792415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.792470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.792487 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.792515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.792538 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.895393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.895454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.895473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.895500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.895517 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.912891 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:11 crc kubenswrapper[4958]: E1006 11:48:11.913305 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.998892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.998938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.998950 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.998967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:11 crc kubenswrapper[4958]: I1006 11:48:11.998982 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:11Z","lastTransitionTime":"2025-10-06T11:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.101521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.101589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.101607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.101632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.101648 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.204130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.204404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.204477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.204571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.204652 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.307039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.307115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.307134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.307199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.307220 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.409280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.409353 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.409375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.409406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.409447 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.511846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.511901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.511923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.511955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.511977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.614969 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.615033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.615052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.615078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.615100 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.718098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.718212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.718230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.718254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.718274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.821088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.821186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.821212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.821236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.821253 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.913420 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.913483 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.913500 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:12 crc kubenswrapper[4958]: E1006 11:48:12.913649 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:12 crc kubenswrapper[4958]: E1006 11:48:12.913811 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:12 crc kubenswrapper[4958]: E1006 11:48:12.914004 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.924348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.924449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.924474 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.924508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:12 crc kubenswrapper[4958]: I1006 11:48:12.924530 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:12Z","lastTransitionTime":"2025-10-06T11:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.028509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.028601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.028631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.028661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.028679 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.131502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.131567 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.131584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.131608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.131628 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.234823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.234913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.234931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.234959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.234976 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.338181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.338250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.338268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.338293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.338311 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.441257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.441296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.441306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.441319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.441327 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.544452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.544527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.544596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.544630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.544653 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.648098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.648210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.648230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.648258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.648279 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.751026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.751079 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.751091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.751110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.751123 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.854117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.854210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.854253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.854281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.854302 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.912786 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:13 crc kubenswrapper[4958]: E1006 11:48:13.913052 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.948069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.948474 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.948733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.948952 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.949098 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:13 crc kubenswrapper[4958]: E1006 11:48:13.969665 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.974880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.974949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.974975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.975006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:13 crc kubenswrapper[4958]: I1006 11:48:13.975023 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:13Z","lastTransitionTime":"2025-10-06T11:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.000414 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.005357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.005402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.005420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.005445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.005467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.025603 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:14Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.031236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.031300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.031323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.031416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.031443 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.053312 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:14Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.058605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.058673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.058715 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.058750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.058772 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.080239 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:14Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.080488 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.082527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.082561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.082573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.082589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.082600 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.185945 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.186027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.186052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.186083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.186105 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.291073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.291137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.291207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.291238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.291260 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.394519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.394581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.394599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.394628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.394647 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.497995 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.498512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.498722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.498881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.499004 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.608990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.609043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.609065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.609095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.609119 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.713324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.713587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.713628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.713671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.713696 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.816530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.816611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.816634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.816659 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.816676 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.913012 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.913253 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.913319 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.913442 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.913750 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:14 crc kubenswrapper[4958]: E1006 11:48:14.914243 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.927727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.927792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.927816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.927845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:14 crc kubenswrapper[4958]: I1006 11:48:14.927865 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:14Z","lastTransitionTime":"2025-10-06T11:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.032650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.032736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.032760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.032792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.032815 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.136535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.136614 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.136633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.136663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.136681 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.245008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.245074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.245092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.245115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.245132 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.348663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.348750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.348776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.348808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.348832 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.451913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.451984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.452007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.452035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.452058 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.555300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.555357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.555375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.555398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.555415 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.658196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.658264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.658283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.658310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.658328 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.760493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.760562 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.760582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.760606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.760623 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.863226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.863299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.863324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.863352 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.863373 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.912866 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:15 crc kubenswrapper[4958]: E1006 11:48:15.913015 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.966530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.966613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.966636 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.966666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:15 crc kubenswrapper[4958]: I1006 11:48:15.966689 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:15Z","lastTransitionTime":"2025-10-06T11:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.070133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.070265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.070288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.070316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.070334 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.173004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.173075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.173102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.173134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.173193 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.276560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.276622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.276640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.276663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.276681 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.379809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.379880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.379897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.379924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.379947 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.483307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.483343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.483354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.483369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.483381 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.586235 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.586312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.586335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.586368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.586390 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.689705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.689795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.689815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.689838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.689874 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.792254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.792288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.792296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.792310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.792319 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.894729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.894760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.894768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.894783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.894792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.913381 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.913450 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:16 crc kubenswrapper[4958]: E1006 11:48:16.913530 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.913557 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:16 crc kubenswrapper[4958]: E1006 11:48:16.913695 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:16 crc kubenswrapper[4958]: E1006 11:48:16.913898 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.937503 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:16Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.958584 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:16Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.978914 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42e
a83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:16Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.996697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.996745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.996761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.996783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:16 crc kubenswrapper[4958]: I1006 11:48:16.996798 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:16Z","lastTransitionTime":"2025-10-06T11:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.000916 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:16Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.018470 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.034669 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.047373 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.063197 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.085778 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.089731 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.101624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.101671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.101682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.101701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.101713 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.102201 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.111386 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.130107 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.149786 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc 
kubenswrapper[4958]: I1006 11:48:17.172870 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.192760 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.205310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.205383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.205408 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.205444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.205468 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.210433 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.240252 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.258610 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.278577 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.298096 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.307596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.307642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.307662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.307688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.307706 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.316564 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc 
kubenswrapper[4958]: I1006 11:48:17.335315 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.365466 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.383921 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.400901 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.411502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.411571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.411582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.411602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.411616 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.422616 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.455988 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.477052 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.491408 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: 
I1006 11:48:17.506794 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.517849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.517952 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.517978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.518056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.518118 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.527034 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.546193 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.565119 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.585508 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:17Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.622094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.622196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.622225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.622256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.622280 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.725278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.725356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.725398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.725422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.725438 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.828803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.828866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.828885 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.828911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.828928 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.869115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:17 crc kubenswrapper[4958]: E1006 11:48:17.869365 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:17 crc kubenswrapper[4958]: E1006 11:48:17.869509 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:48:33.86946899 +0000 UTC m=+67.755494328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.912627 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:17 crc kubenswrapper[4958]: E1006 11:48:17.912806 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.915442 4958 scope.go:117] "RemoveContainer" containerID="bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.932040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.932104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.932127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.932191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:17 crc kubenswrapper[4958]: I1006 11:48:17.932219 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:17Z","lastTransitionTime":"2025-10-06T11:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.035814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.035883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.035900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.035925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.035944 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.139604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.139653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.139667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.139686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.139701 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.242388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.242457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.242483 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.242512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.242534 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.311626 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/1.log" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.314446 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.315656 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.328833 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.346573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.346632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.346644 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.346662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.346676 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.350503 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.371729 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.395190 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.425309 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.449175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.449211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.449219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.449234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.449244 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.456798 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.477294 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.491716 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.510379 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: 
I1006 11:48:18.529176 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.550360 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.551688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.551740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.551755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.551775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.551788 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.564192 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.578793 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.579052 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:48:50.579021262 +0000 UTC m=+84.465046590 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.579041 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.595002 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.612005 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.628780 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.641857 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:18Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:18 crc 
kubenswrapper[4958]: I1006 11:48:18.655046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.655104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.655127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.655210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.655258 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.679950 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.680057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.680179 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.680236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680464 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680500 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680526 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680616 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:50.680587702 +0000 UTC m=+84.566613070 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680713 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680761 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680779 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:50.680766166 +0000 UTC m=+84.566791484 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680855 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:50.680838197 +0000 UTC m=+84.566863595 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680852 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680887 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680899 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.680960 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:48:50.68094451 +0000 UTC m=+84.566969818 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.757923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.758196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.758209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.758225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.758237 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.865481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.865543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.865558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.865577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.865589 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.912489 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.912536 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.912616 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.912721 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.912804 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:18 crc kubenswrapper[4958]: E1006 11:48:18.912950 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.968404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.968467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.968486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.968511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:18 crc kubenswrapper[4958]: I1006 11:48:18.968530 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:18Z","lastTransitionTime":"2025-10-06T11:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.072089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.072183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.072203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.072228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.072248 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.175373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.175468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.175486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.175512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.175531 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.278846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.278915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.278933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.278959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.278983 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.320382 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/2.log" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.321535 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/1.log" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.324709 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462" exitCode=1 Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.324747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.324788 4958 scope.go:117] "RemoveContainer" containerID="bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.326001 4958 scope.go:117] "RemoveContainer" containerID="3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462" Oct 06 11:48:19 crc kubenswrapper[4958]: E1006 11:48:19.326327 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.354457 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.376786 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.381513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.381541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.381551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.381566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.381578 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.398680 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.416774 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.433083 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.448074 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: 
I1006 11:48:19.460500 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.477862 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.483752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.483820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.483839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.483869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.483889 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.492833 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.504872 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.518280 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.533229 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.553232 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.565108 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.577422 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc 
kubenswrapper[4958]: I1006 11:48:19.586473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.586541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.586565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.586590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.586608 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.595220 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.620531 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde47cc2eb6224040038d0ad4b06a8f91be0c8eaabe6545c6169b533f913dbd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:47:59Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1006 11:47:59.290723 6420 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 
1.119513ms\\\\nI1006 11:47:59.290754 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1006 11:47:59.290776 6420 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 2.435771ms\\\\nI1006 11:47:59.290868 6420 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI1006 11:47:59.290896 6420 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.503053ms\\\\nI1006 11:47:59.291059 6420 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:47:59.291204 6420 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:47:59.291271 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:47:59.291319 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:47:59.291428 6420 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 
11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:19Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.689663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.689774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.689802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.689833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.689857 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.792459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.792511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.792532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.792558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.792577 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.895732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.895826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.895854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.895881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.895900 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.913236 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:19 crc kubenswrapper[4958]: E1006 11:48:19.913427 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.998608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.998700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.998717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.998740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:19 crc kubenswrapper[4958]: I1006 11:48:19.998761 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:19Z","lastTransitionTime":"2025-10-06T11:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.102050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.102104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.102115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.102132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.102169 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.205243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.205310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.205345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.205371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.205391 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.309978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.310048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.310066 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.310091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.310109 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.338951 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/2.log" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.346054 4958 scope.go:117] "RemoveContainer" containerID="3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462" Oct 06 11:48:20 crc kubenswrapper[4958]: E1006 11:48:20.346334 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.365983 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.383736 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.401618 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.413621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.413667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.413678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.413697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.413709 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.418879 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.439140 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.459667 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.475585 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.493376 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc 
kubenswrapper[4958]: I1006 11:48:20.512889 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.517216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.517286 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.517309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.517335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.517356 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.544755 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.563751 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.579558 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.598584 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.619750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.619790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc 
kubenswrapper[4958]: I1006 11:48:20.619799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.619812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.619821 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.624072 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.642286 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.658685 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: 
I1006 11:48:20.671875 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:20Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.722719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.722780 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.722799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.722823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.722841 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.825403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.825470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.825488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.825512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.825528 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.913047 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.913253 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:20 crc kubenswrapper[4958]: E1006 11:48:20.913356 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.913411 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:20 crc kubenswrapper[4958]: E1006 11:48:20.913597 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:20 crc kubenswrapper[4958]: E1006 11:48:20.913776 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.928206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.928286 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.928313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.928343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:20 crc kubenswrapper[4958]: I1006 11:48:20.928366 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:20Z","lastTransitionTime":"2025-10-06T11:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.031665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.031721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.031739 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.031762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.031781 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.135018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.135069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.135086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.135110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.135129 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.239010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.239090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.239114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.239184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.239212 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.342082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.342137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.342181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.342204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.342220 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.445640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.445761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.445781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.445804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.445822 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.549111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.549233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.549256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.549280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.549301 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.652873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.652951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.652998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.653034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.653060 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.755944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.756000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.756012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.756030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.756042 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.859234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.859291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.859306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.859328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.859344 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.912503 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:21 crc kubenswrapper[4958]: E1006 11:48:21.912790 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.962498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.962581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.962619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.962652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:21 crc kubenswrapper[4958]: I1006 11:48:21.962674 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:21Z","lastTransitionTime":"2025-10-06T11:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.065922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.065982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.066004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.066032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.066051 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.169569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.169727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.169749 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.169775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.169794 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.272791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.272852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.272870 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.272896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.272914 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.376023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.376077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.376093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.376115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.376132 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.479910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.479961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.479977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.479998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.480015 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.527456 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.548822 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.569716 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.582835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.582894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.582920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.582951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.582974 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.589980 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.610971 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.635355 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.660306 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.686543 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.686736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.687194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.687212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.687235 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.687254 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.711776 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.729515 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.751701 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.773344 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.788540 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.791199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.791259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.791282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.791315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.791340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.804825 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.832602 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 
11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.856795 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.872612 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.886941 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:22 crc 
kubenswrapper[4958]: I1006 11:48:22.893921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.893973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.893990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.894014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.894030 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.912942 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.912970 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:22 crc kubenswrapper[4958]: E1006 11:48:22.913084 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.913166 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:22 crc kubenswrapper[4958]: E1006 11:48:22.913278 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:22 crc kubenswrapper[4958]: E1006 11:48:22.914493 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.996455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.996498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.996507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.996520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:22 crc kubenswrapper[4958]: I1006 11:48:22.996530 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:22Z","lastTransitionTime":"2025-10-06T11:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.099277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.099331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.099348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.099371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.099388 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.203322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.203420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.203444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.203477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.203502 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.306404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.306500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.306549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.306574 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.306595 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.410037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.410091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.410111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.410134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.410176 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.513202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.513284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.513308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.513339 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.513365 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.616111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.616212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.616230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.616260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.616279 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.719336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.719449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.719468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.719490 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.719506 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.822245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.822311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.822330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.822364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.822382 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.912651 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:23 crc kubenswrapper[4958]: E1006 11:48:23.912887 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.925215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.925279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.925298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.925323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:23 crc kubenswrapper[4958]: I1006 11:48:23.925346 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:23Z","lastTransitionTime":"2025-10-06T11:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.028755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.028808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.028817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.028834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.028851 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.132185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.132241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.132261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.132293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.132311 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.137177 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.137224 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.137240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.137258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.137283 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.155374 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.161215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.161288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.161312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.161340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.161359 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.177933 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.182600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.182650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.182669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.182692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.182709 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.197633 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.203361 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.203446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.203472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.203496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.203543 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.224372 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.230904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.231060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.231189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.231290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.231393 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.250667 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.250795 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.252387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.252425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.252438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.252455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.252467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.356686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.356764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.356790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.356823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.356850 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.460340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.460409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.460425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.460449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.460466 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.563401 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.563457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.563475 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.563500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.563518 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.665732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.666055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.666250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.666382 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.666502 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.768879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.768924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.768937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.768953 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.768966 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.871406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.871622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.871683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.871790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.872018 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.913238 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.913255 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.913581 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.913772 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.913908 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:24 crc kubenswrapper[4958]: E1006 11:48:24.914030 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.974985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.975057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.975083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.975109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:24 crc kubenswrapper[4958]: I1006 11:48:24.975131 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:24Z","lastTransitionTime":"2025-10-06T11:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.078699 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.078771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.078799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.078831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.078854 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.181967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.182030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.182047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.182070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.182090 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.284731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.284802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.284824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.284850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.284868 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.387889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.387968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.388003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.388034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.388094 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.491368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.491480 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.491497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.491521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.491540 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.593834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.593888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.593905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.593928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.593944 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.696644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.696700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.696769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.696792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.696832 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.799827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.799890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.799925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.799964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.799989 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.902954 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.903022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.903045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.903074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.903097 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:25Z","lastTransitionTime":"2025-10-06T11:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:25 crc kubenswrapper[4958]: I1006 11:48:25.912171 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:25 crc kubenswrapper[4958]: E1006 11:48:25.912287 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.005926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.006013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.006039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.006069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.006092 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.109526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.109606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.109630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.109658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.109684 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.212594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.212652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.212682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.212727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.212751 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.315890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.315947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.315964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.315987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.316011 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.419354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.419423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.419441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.419466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.419485 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.522584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.522667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.522697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.522726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.522748 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.625573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.626015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.626250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.626459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.626618 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.729513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.729573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.729590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.729613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.729631 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.834119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.834227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.834333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.834359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.834581 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.912275 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.912318 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.912295 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:26 crc kubenswrapper[4958]: E1006 11:48:26.912419 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:26 crc kubenswrapper[4958]: E1006 11:48:26.912503 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:26 crc kubenswrapper[4958]: E1006 11:48:26.912720 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.929470 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.938198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.938255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.938266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.938379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.938411 
4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:26Z","lastTransitionTime":"2025-10-06T11:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.944411 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.957583 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.970870 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:26 crc kubenswrapper[4958]: I1006 11:48:26.986334 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a9
39252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.003088 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.017442 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.030752 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc 
kubenswrapper[4958]: I1006 11:48:27.041011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.041053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.041062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.041078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.041088 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.046338 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.077868 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.091243 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.107476 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.123397 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.139397 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.143390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.143436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.143450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.143469 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.143481 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.151982 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z 
is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.164003 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.176602 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.246359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.246409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.246422 4958 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.246441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.246454 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.348892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.348944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.348963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.348987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.349007 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.452356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.452408 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.452425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.452449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.452466 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.555350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.555415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.555434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.555458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.555480 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.658825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.658898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.658916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.658939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.658956 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.761233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.761301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.761319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.761343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.761360 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.864139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.864239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.864287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.864309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.864324 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.913032 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:27 crc kubenswrapper[4958]: E1006 11:48:27.913232 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.967093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.967184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.967203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.967236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:27 crc kubenswrapper[4958]: I1006 11:48:27.967255 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:27Z","lastTransitionTime":"2025-10-06T11:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.071187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.071231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.071245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.071262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.071274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.174696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.174737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.174748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.174763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.174776 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.277985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.278072 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.278093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.278117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.278166 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.381929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.382015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.382039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.382070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.382091 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.485250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.485320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.485343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.485376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.485400 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.590832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.590909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.590935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.590966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.590989 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.693472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.693523 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.693538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.693560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.693577 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.796486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.796559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.796577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.796601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.796617 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.900013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.900058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.900074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.900096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.900115 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:28Z","lastTransitionTime":"2025-10-06T11:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.912240 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.912266 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:28 crc kubenswrapper[4958]: E1006 11:48:28.912394 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:28 crc kubenswrapper[4958]: I1006 11:48:28.912491 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:28 crc kubenswrapper[4958]: E1006 11:48:28.912686 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:28 crc kubenswrapper[4958]: E1006 11:48:28.913006 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.003423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.003489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.003507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.003588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.003610 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.118648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.118721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.118738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.118786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.118807 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.222317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.222377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.222391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.222410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.222425 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.325827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.325896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.325915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.325939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.325956 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.429346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.429420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.429522 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.429605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.429627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.532911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.532980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.533000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.533027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.533044 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.636416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.636498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.636533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.636563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.636584 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.739842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.739907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.739917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.739932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.739943 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.842192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.842296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.842307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.842321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.842332 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.912564 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:29 crc kubenswrapper[4958]: E1006 11:48:29.912684 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.944661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.944735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.944758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.944790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:29 crc kubenswrapper[4958]: I1006 11:48:29.944810 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:29Z","lastTransitionTime":"2025-10-06T11:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.047766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.047832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.047851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.047876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.047932 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.151683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.151732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.151743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.151759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.151771 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.254270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.254328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.254346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.254393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.254412 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.358010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.358076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.358101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.358241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.358276 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.468590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.468710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.468749 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.468777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.468798 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.571501 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.571559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.571576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.571598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.571615 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.674773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.674834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.674852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.674874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.674892 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.778188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.778265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.778283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.778309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.778329 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.881738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.881840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.881863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.881889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.881908 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.913313 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.913399 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.913324 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:30 crc kubenswrapper[4958]: E1006 11:48:30.913579 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:30 crc kubenswrapper[4958]: E1006 11:48:30.913762 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:30 crc kubenswrapper[4958]: E1006 11:48:30.914437 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.915077 4958 scope.go:117] "RemoveContainer" containerID="3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462" Oct 06 11:48:30 crc kubenswrapper[4958]: E1006 11:48:30.915402 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.985711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.985778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.985797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.985828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:30 crc kubenswrapper[4958]: I1006 11:48:30.985846 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:30Z","lastTransitionTime":"2025-10-06T11:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.088443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.088505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.088527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.088558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.088580 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.192073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.192212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.192239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.192267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.192286 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.294841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.294934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.294952 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.294977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.294994 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.396769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.396831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.396853 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.396884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.396907 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.499012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.499069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.499077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.499089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.499097 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.602097 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.602311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.602343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.602428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.602453 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.705050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.705127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.705202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.705232 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.705250 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.808115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.808216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.808236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.808261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.808280 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.911796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.911854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.911871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.911894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.911911 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:31Z","lastTransitionTime":"2025-10-06T11:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:31 crc kubenswrapper[4958]: I1006 11:48:31.912261 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:31 crc kubenswrapper[4958]: E1006 11:48:31.912504 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.014791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.014858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.014877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.014902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.014922 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.117684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.117758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.117784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.117813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.117836 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.220275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.220360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.220381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.220408 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.220425 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.323331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.323402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.323414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.323430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.323442 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.425514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.425573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.425591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.425613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.425630 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.528218 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.528297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.528310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.528329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.528343 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.631733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.631814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.631878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.631907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.631929 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.734497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.734572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.734620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.734650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.734675 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.837780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.837846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.837868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.837901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.837925 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.912603 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:32 crc kubenswrapper[4958]: E1006 11:48:32.912860 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.913292 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:32 crc kubenswrapper[4958]: E1006 11:48:32.913407 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.913682 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:32 crc kubenswrapper[4958]: E1006 11:48:32.913798 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.940483 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.940547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.940569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.940599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:32 crc kubenswrapper[4958]: I1006 11:48:32.940620 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:32Z","lastTransitionTime":"2025-10-06T11:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.044182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.044248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.044273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.044302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.044325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.146954 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.147007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.147016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.147028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.147037 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.249052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.249095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.249107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.249121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.249133 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.351320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.351356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.351365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.351378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.351387 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.453101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.453166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.453181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.453199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.453213 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.556344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.556394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.556409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.556430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.556443 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.658958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.659020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.659038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.659063 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.659084 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.762011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.762084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.762108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.762139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.762199 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.865012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.865178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.865205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.865236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.865259 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.886867 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:33 crc kubenswrapper[4958]: E1006 11:48:33.887050 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:33 crc kubenswrapper[4958]: E1006 11:48:33.887273 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:49:05.887230599 +0000 UTC m=+99.773255957 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.913188 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:33 crc kubenswrapper[4958]: E1006 11:48:33.913391 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.967512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.967534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.967542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.967552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:33 crc kubenswrapper[4958]: I1006 11:48:33.967560 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:33Z","lastTransitionTime":"2025-10-06T11:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.070176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.070237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.070254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.070269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.070279 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.172711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.172783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.172806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.172828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.172844 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.276223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.276281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.276294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.276309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.276336 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.379791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.379830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.379839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.379854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.379866 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.482295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.482337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.482350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.482368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.482381 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.584776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.585078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.585209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.585368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.585498 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.630554 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.630607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.630620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.630635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.630649 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.643244 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.647487 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.647542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.647559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.647582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.647599 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.664642 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.667551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.667596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.667614 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.667636 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.667654 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.681297 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.686006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.686060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.686073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.686091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.686104 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.700272 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.703670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.703730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.703747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.703770 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.703786 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.719439 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.719632 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.721128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.721255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.721323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.721396 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.721455 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.824591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.824638 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.824649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.824668 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.824679 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.912458 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.912641 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.912737 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.912794 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.912851 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:34 crc kubenswrapper[4958]: E1006 11:48:34.913117 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.926413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.926448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.926459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.926471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:34 crc kubenswrapper[4958]: I1006 11:48:34.926481 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:34Z","lastTransitionTime":"2025-10-06T11:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.028500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.028529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.028539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.028551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.028560 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.136067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.136102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.136110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.136127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.136136 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.239015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.239065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.239078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.239096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.239107 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.340843 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.340897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.340908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.340923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.340934 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.443702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.443763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.443783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.443808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.443825 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.546747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.546814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.546838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.546882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.546903 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.649262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.649299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.649312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.649328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.649340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.751493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.751523 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.751533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.751545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.751556 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.853671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.853726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.853744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.853765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.853783 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.912589 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:35 crc kubenswrapper[4958]: E1006 11:48:35.912949 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.956576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.956632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.956650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.956675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:35 crc kubenswrapper[4958]: I1006 11:48:35.956696 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:35Z","lastTransitionTime":"2025-10-06T11:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.059120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.059176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.059186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.059202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.059213 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.162660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.162698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.162706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.162719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.162729 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.265734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.265765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.265773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.265786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.265795 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.368021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.368087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.368105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.368129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.368183 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.415109 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/0.log" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.415199 4958 generic.go:334] "Generic (PLEG): container finished" podID="8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7" containerID="979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b" exitCode=1 Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.415237 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerDied","Data":"979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.416263 4958 scope.go:117] "RemoveContainer" containerID="979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.434673 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.447775 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.459779 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.471432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.471456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.471463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.471475 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.471484 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.471734 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.493336 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 
11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.522588 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.563610 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.573969 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.574035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.574052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.574074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.574091 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.575290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc 
kubenswrapper[4958]: I1006 11:48:36.591932 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.613034 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.625273 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.637830 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.648982 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.662698 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.675845 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.676767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.676806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc 
kubenswrapper[4958]: I1006 11:48:36.676817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.676831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.676843 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.691425 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.704276 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.780315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.780387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.780412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.780441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.780495 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.883489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.883540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.883548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.883566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.883577 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.912523 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.912594 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.912525 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:36 crc kubenswrapper[4958]: E1006 11:48:36.912687 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:36 crc kubenswrapper[4958]: E1006 11:48:36.912833 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:36 crc kubenswrapper[4958]: E1006 11:48:36.913006 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.926583 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.938133 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634b
f9d4037a6b8b0456bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.953763 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.965751 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.980868 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.985819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.985953 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:36 crc 
kubenswrapper[4958]: I1006 11:48:36.986045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.986132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:36 crc kubenswrapper[4958]: I1006 11:48:36.986259 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:36Z","lastTransitionTime":"2025-10-06T11:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.001123 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.015755 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.029013 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.044209 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.053670 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.066619 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.081000 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a9
39252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.088548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.088582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.088590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.088604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.088627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.101365 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.113362 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.122595 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc 
kubenswrapper[4958]: I1006 11:48:37.134440 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.153553 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.191425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.191471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.191479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.191493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.191503 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.293007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.293038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.293047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.293060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.293069 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.396016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.396064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.396081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.396102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.396119 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.420967 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/0.log" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.421040 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerStarted","Data":"4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.438902 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.467029 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.488789 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.498415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.498472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.498485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.498504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.498516 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.506814 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.520749 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.532294 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.544696 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.558611 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.571627 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.582974 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.593532 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.601031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.601099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.601117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.601178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.601198 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.605672 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.620183 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.633459 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.644372 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc 
kubenswrapper[4958]: I1006 11:48:37.660276 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c2610
6f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.674762 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.703832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.703902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.703918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.703944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.703965 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.806887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.806942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.806959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.806982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.807001 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.909792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.909853 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.909876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.909900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.909916 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:37Z","lastTransitionTime":"2025-10-06T11:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:37 crc kubenswrapper[4958]: I1006 11:48:37.913030 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:37 crc kubenswrapper[4958]: E1006 11:48:37.913236 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.012463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.012519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.012537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.012561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.012578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.114891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.114945 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.114963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.114985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.115001 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.217756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.217804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.217821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.217843 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.217859 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.320068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.320178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.320197 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.320222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.320241 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.423002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.423078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.423092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.423110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.423157 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.526628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.526697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.526718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.526741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.526758 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.629394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.629478 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.629498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.629526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.629545 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.732453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.732502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.732516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.732534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.732547 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.835764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.835803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.835812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.835826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.835835 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.912615 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.912664 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:38 crc kubenswrapper[4958]: E1006 11:48:38.912713 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.912680 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:38 crc kubenswrapper[4958]: E1006 11:48:38.912835 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:38 crc kubenswrapper[4958]: E1006 11:48:38.913001 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.937932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.937997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.938016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.938043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:38 crc kubenswrapper[4958]: I1006 11:48:38.938065 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:38Z","lastTransitionTime":"2025-10-06T11:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.040874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.040935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.040956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.040980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.040997 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.143750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.143818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.143842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.143874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.143896 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.246534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.246602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.246619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.246644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.246661 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.349616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.349676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.349690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.349707 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.349719 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.452915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.452973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.452988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.453013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.453030 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.555694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.555745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.555760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.555779 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.555791 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.658742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.658793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.658805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.658824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.658835 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.761092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.761194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.761215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.761239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.761255 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.864005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.864095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.864113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.864136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.864186 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.913181 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:39 crc kubenswrapper[4958]: E1006 11:48:39.913365 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.966400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.966465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.966488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.966517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:39 crc kubenswrapper[4958]: I1006 11:48:39.966540 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:39Z","lastTransitionTime":"2025-10-06T11:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.069096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.069178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.069198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.069221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.069240 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.172395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.172467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.172485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.172510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.172528 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.275897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.275968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.275987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.276011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.276028 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.379378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.379435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.379448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.379466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.379486 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.482462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.482504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.482516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.482531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.482544 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.585185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.585243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.585255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.585272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.585284 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.687639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.687683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.687691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.687706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.687717 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.813449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.813512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.813530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.813556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.813576 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.912334 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.912371 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:40 crc kubenswrapper[4958]: E1006 11:48:40.912580 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.912616 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:40 crc kubenswrapper[4958]: E1006 11:48:40.912759 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:40 crc kubenswrapper[4958]: E1006 11:48:40.912928 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.916053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.916088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.916099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.916116 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:40 crc kubenswrapper[4958]: I1006 11:48:40.916129 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:40Z","lastTransitionTime":"2025-10-06T11:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.018607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.018650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.018663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.018677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.018687 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.120795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.120829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.120837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.120849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.120858 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.222961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.222990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.222998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.223009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.223047 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.325651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.325723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.325741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.325772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.325795 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.428546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.428607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.428625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.428649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.428667 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.531315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.531416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.531464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.531491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.531508 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.633915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.633972 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.633989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.634013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.634030 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.736695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.736746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.736762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.736782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.736795 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.839175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.839224 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.839238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.839259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.839274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.913232 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:41 crc kubenswrapper[4958]: E1006 11:48:41.913409 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.941742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.941772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.941786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.941800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:41 crc kubenswrapper[4958]: I1006 11:48:41.941811 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:41Z","lastTransitionTime":"2025-10-06T11:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.045237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.045269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.045277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.045289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.045298 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.148031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.148086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.148105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.148138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.148193 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.250997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.251055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.251071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.251091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.251105 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.353862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.353923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.353932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.353946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.353963 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.456363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.456393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.456403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.456418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.456429 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.558821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.558896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.558920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.558948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.558971 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.661961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.662005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.662024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.662052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.662073 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.764994 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.765025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.765033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.765045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.765054 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.867215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.867243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.867254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.867268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.867277 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.912412 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:42 crc kubenswrapper[4958]: E1006 11:48:42.912580 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.912664 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:42 crc kubenswrapper[4958]: E1006 11:48:42.912807 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.912984 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:42 crc kubenswrapper[4958]: E1006 11:48:42.913059 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.970329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.970394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.970412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.970438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:42 crc kubenswrapper[4958]: I1006 11:48:42.970456 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:42Z","lastTransitionTime":"2025-10-06T11:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.073601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.073661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.073688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.073718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.073743 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.177594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.177664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.177690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.177720 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.177743 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.281192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.281274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.281300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.281335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.281360 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.384685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.384756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.384773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.384800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.384821 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.487553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.487624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.487642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.487665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.487682 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.590975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.591036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.591055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.591079 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.591096 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.694277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.694331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.694347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.694372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.694389 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.797247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.797300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.797317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.797340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.797358 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.900408 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.900479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.900505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.900534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.900556 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:43Z","lastTransitionTime":"2025-10-06T11:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:43 crc kubenswrapper[4958]: I1006 11:48:43.913211 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:43 crc kubenswrapper[4958]: E1006 11:48:43.913388 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.003357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.003488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.003510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.003532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.003549 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.106931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.107017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.107047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.107079 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.107101 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.210108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.210227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.210251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.210275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.210292 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.313453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.313516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.313534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.313557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.313574 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.417393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.417488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.417507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.417534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.417555 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.521839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.521924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.521952 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.521983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.522001 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.625046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.625116 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.625136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.625183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.625199 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.727705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.727767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.727786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.727809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.727828 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.831069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.831129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.831170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.831194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.831211 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.913309 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:44 crc kubenswrapper[4958]: E1006 11:48:44.913483 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.913644 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.913747 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:44 crc kubenswrapper[4958]: E1006 11:48:44.913925 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:44 crc kubenswrapper[4958]: E1006 11:48:44.914008 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.933690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.933734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.933751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.933779 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:44 crc kubenswrapper[4958]: I1006 11:48:44.933796 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:44Z","lastTransitionTime":"2025-10-06T11:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.036882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.036940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.036960 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.036980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.036997 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.068430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.068758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.068886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.069088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.069324 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.091269 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.097184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.097256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.097275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.097299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.097317 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.114859 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.120620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.120990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.121602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.121800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.121986 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.144844 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.149769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.149833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.149857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.149887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.149910 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.170329 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.175657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.175703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.175737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.175760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.175773 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.195125 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.195391 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.197484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.197530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.197566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.197584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.197596 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.301009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.301062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.301081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.301104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.301120 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.403958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.404007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.404023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.404045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.404062 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.507752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.507797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.507829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.507847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.507859 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.611585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.611669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.611734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.612007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.612059 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.714610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.714660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.714676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.714697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.714716 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.817612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.817686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.817704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.817736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.817754 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.913308 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:45 crc kubenswrapper[4958]: E1006 11:48:45.913539 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.914623 4958 scope.go:117] "RemoveContainer" containerID="3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.920682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.920730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.920747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.920770 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:45 crc kubenswrapper[4958]: I1006 11:48:45.920787 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:45Z","lastTransitionTime":"2025-10-06T11:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.024238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.024311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.024329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.024356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.024374 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.127556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.127608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.127624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.127649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.127668 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.231060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.231117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.231188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.231219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.231240 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.335044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.335105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.335123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.335181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.335202 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.438229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.438311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.438334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.438370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.438398 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.464796 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/2.log" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.467912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.468488 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.506385 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566
eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.581477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.581550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.581584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.581614 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 
11:48:46.581635 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.583041 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\
"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"nam
e\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.600039 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.619462 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.635184 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.648650 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.664276 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.680422 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.684274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.684332 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.684344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.684367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.684380 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.697656 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.711430 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.735119 4958 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.758291 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.772367 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc 
kubenswrapper[4958]: I1006 11:48:46.787171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.787222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.787234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.787249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.787260 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.788759 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.805183 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.819432 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.836946 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.889548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.889592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.889603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.889621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.889635 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.912890 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.912923 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:46 crc kubenswrapper[4958]: E1006 11:48:46.913047 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.913100 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:46 crc kubenswrapper[4958]: E1006 11:48:46.913165 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:46 crc kubenswrapper[4958]: E1006 11:48:46.913348 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.927342 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc 
kubenswrapper[4958]: I1006 11:48:46.951030 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c2610
6f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.970791 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.981847 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.991533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.991589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.991609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.991633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.991652 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:46Z","lastTransitionTime":"2025-10-06T11:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:46 crc kubenswrapper[4958]: I1006 11:48:46.999075 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.019082 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.037636 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.053790 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.065522 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.079852 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.094574 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.094646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.094666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.094690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.094709 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.095624 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.110184 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.125577 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.141463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.154951 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.169385 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.184067 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.196942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.197000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.197019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.197043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.197062 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.300018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.300078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.300097 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.300123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.300140 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.403360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.403415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.403434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.403458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.403478 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.474687 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/3.log" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.475684 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/2.log" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.479610 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" exitCode=1 Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.479665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.479709 4958 scope.go:117] "RemoveContainer" containerID="3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.480647 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:48:47 crc kubenswrapper[4958]: E1006 11:48:47.480900 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.502986 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e
625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.506594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.506648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.506666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.506691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.506710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.523758 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.540779 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.558621 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.579008 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.609863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.609921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.609938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.609962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.609980 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.615561 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3385ca172113f8a597490f651e5ab5d40777cf02ba40bb61b2ffa350d5598462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:18Z\\\",\\\"message\\\":\\\"edroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:18.897422 6666 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1006 11:48:18.897824 6666 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1006 11:48:18.897885 6666 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:48:18.897895 6666 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:48:18.897916 6666 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:48:18.897986 6666 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:18.897991 6666 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:18.897989 6666 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:48:18.898014 6666 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:18.898007 6666 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:18.898024 6666 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:48:18.898039 6666 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:18.898072 6666 factory.go:656] Stopping watch factory\\\\nI1006 11:48:18.898105 6666 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:46Z\\\",\\\"message\\\":\\\"ExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:46.912470 7033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:48:46.912521 7033 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:48:46.912526 7033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:46.912534 7033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 
11:48:46.912552 7033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:46.912589 7033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:48:46.912650 7033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:48:46.912667 7033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:48:46.912704 7033 factory.go:656] Stopping watch factory\\\\nI1006 11:48:46.912725 7033 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:48:46.912741 7033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:46.912757 7033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:48:46.912772 7033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 11:48:46.912859 7033 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.634094 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.654522 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.671861 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.695576 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.713778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.713864 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.713886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.713914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.713934 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.720604 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.738319 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.757458 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.774431 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.816522 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.816571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.816588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.816608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.816624 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.828641 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.841557 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.854565 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.912872 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:47 crc kubenswrapper[4958]: E1006 11:48:47.913083 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.920245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.920282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.920293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.920309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:47 crc kubenswrapper[4958]: I1006 11:48:47.920322 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:47Z","lastTransitionTime":"2025-10-06T11:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.023130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.023252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.023278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.023307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.023329 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.126017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.126086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.126107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.126139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.126209 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.229042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.229122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.229141 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.229198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.229219 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.332735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.332797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.332815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.332841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.332860 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.435979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.436059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.436084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.436114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.436135 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.488048 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/3.log" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.492993 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:48:48 crc kubenswrapper[4958]: E1006 11:48:48.493490 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.510290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.536463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.539104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.539227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.539249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.539277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 
11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.539298 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.557788 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.576129 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.596906 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.627138 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:46Z\\\",\\\"message\\\":\\\"ExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:46.912470 7033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:48:46.912521 7033 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1006 11:48:46.912526 7033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:46.912534 7033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:46.912552 7033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:46.912589 7033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:48:46.912650 7033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:48:46.912667 7033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:48:46.912704 7033 factory.go:656] Stopping watch factory\\\\nI1006 11:48:46.912725 7033 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:48:46.912741 7033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:46.912757 7033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:48:46.912772 7033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 11:48:46.912859 7033 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.641858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.641912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.641933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.641957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.641975 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.646680 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b
f29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.665005 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.684384 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.703273 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.720029 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.738915 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.744555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.744751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc 
kubenswrapper[4958]: I1006 11:48:48.744851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.744962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.745065 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.757059 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.773809 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.793537 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.812097 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.828633 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.847933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.847977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.847994 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.848020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.848037 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.913103 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.913216 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.913269 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:48 crc kubenswrapper[4958]: E1006 11:48:48.913390 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:48 crc kubenswrapper[4958]: E1006 11:48:48.913519 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:48 crc kubenswrapper[4958]: E1006 11:48:48.913648 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.951308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.951414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.951441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.951478 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:48 crc kubenswrapper[4958]: I1006 11:48:48.951506 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:48Z","lastTransitionTime":"2025-10-06T11:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.055099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.055195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.055213 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.055240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.055343 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.158976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.159054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.159073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.159099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.159124 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.262489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.262553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.262572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.262598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.262614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.365518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.365592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.365610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.365642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.365661 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.468329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.468399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.468417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.468449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.468468 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.571521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.571605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.571626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.571654 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.571674 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.677258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.677332 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.677385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.677415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.677429 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.780684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.780745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.780761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.780789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.780805 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.884643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.885610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.885772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.885922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.886052 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.912697 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:49 crc kubenswrapper[4958]: E1006 11:48:49.912925 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.989036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.989349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.989505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.989653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:49 crc kubenswrapper[4958]: I1006 11:48:49.989791 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:49Z","lastTransitionTime":"2025-10-06T11:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.093680 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.093758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.093780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.093808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.093828 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.196863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.197137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.197339 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.197483 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.197626 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.301500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.301570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.301589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.301616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.301635 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.404665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.404902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.405056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.405238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.405391 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.507819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.507877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.507894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.507919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.507939 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.611768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.611860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.611886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.611925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.611954 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.657387 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.657687 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:49:54.657652388 +0000 UTC m=+148.543677736 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.715858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.715943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.715969 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.716002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.716026 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.759693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.759761 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.759817 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.759858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760014 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760033 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760050 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760094 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760120 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760132 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760198 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:49:54.760119218 +0000 UTC m=+148.646144566 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760211 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760237 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760248 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:49:54.760220801 +0000 UTC m=+148.646246189 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760297 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:49:54.760273422 +0000 UTC m=+148.646298810 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.760328 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:49:54.760312643 +0000 UTC m=+148.646338151 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.819125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.819564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.819719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.819871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.820009 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.912829 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.912912 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.912851 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.913001 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.913265 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:50 crc kubenswrapper[4958]: E1006 11:48:50.913406 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.922202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.922447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.922595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.922750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:50 crc kubenswrapper[4958]: I1006 11:48:50.922902 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:50Z","lastTransitionTime":"2025-10-06T11:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.025412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.025497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.025515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.025536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.025550 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.128872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.128926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.128944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.128967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.128986 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.231695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.231813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.231966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.232027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.232054 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.334536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.334610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.334628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.334652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.334670 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.437837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.437904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.437922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.437947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.437964 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.541016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.541067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.541082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.541103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.541120 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.644000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.644050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.644062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.644084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.644098 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.747202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.747272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.747295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.747325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.747343 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.850635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.850715 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.850740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.850771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.850797 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.913357 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:51 crc kubenswrapper[4958]: E1006 11:48:51.913562 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.954852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.954972 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.955048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.955137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:51 crc kubenswrapper[4958]: I1006 11:48:51.955189 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:51Z","lastTransitionTime":"2025-10-06T11:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.057781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.057850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.057873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.057900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.057922 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.160947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.161001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.161022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.161048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.161065 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.264647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.264740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.264762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.264796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.264820 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.368429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.368495 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.368518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.368544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.368561 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.471135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.471237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.471255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.471280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.471297 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.574510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.574593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.574611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.574702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.574756 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.678045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.678115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.678138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.678203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.678221 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.781686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.781759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.781788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.781817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.781839 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.884428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.884484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.884499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.884520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.884537 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.913036 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.913036 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.913049 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:52 crc kubenswrapper[4958]: E1006 11:48:52.913292 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:52 crc kubenswrapper[4958]: E1006 11:48:52.913475 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:52 crc kubenswrapper[4958]: E1006 11:48:52.913647 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.987919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.987976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.987993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.988016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:52 crc kubenswrapper[4958]: I1006 11:48:52.988032 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:52Z","lastTransitionTime":"2025-10-06T11:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.091648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.091732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.091776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.091810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.091831 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.195496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.195569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.195586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.195613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.195632 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.299731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.299833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.299852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.299910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.299930 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.403789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.403846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.403863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.403887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.403904 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.507566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.507631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.507648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.507673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.507689 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.610337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.610405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.610427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.610456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.610479 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.713578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.713654 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.713679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.713708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.713731 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.816544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.816619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.816642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.816672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.816695 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.913077 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:53 crc kubenswrapper[4958]: E1006 11:48:53.913288 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.919427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.919485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.919503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.919527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:53 crc kubenswrapper[4958]: I1006 11:48:53.919543 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:53Z","lastTransitionTime":"2025-10-06T11:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.022421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.022500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.022520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.022546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.022563 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.125641 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.125727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.125751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.125782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.125802 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.229101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.229186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.229204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.229228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.229249 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.332728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.332797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.332819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.332852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.332873 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.435617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.435693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.435717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.435750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.435777 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.538368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.538421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.538439 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.538462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.538481 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.641182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.641257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.641291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.641323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.641346 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.744379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.744450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.744473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.744500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.744525 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.847712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.847805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.847833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.847865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.847887 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.912985 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.913031 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.913057 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:54 crc kubenswrapper[4958]: E1006 11:48:54.913259 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:54 crc kubenswrapper[4958]: E1006 11:48:54.913536 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:54 crc kubenswrapper[4958]: E1006 11:48:54.913874 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.951979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.952046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.952063 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.952091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:54 crc kubenswrapper[4958]: I1006 11:48:54.952107 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:54Z","lastTransitionTime":"2025-10-06T11:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.055395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.055458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.055475 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.055503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.055523 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.158244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.158285 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.158295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.158308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.158320 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.261231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.261309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.261335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.261363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.261382 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.364630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.364707 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.364723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.364756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.364776 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.423787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.423877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.423903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.423935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.423957 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: E1006 11:48:55.447422 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.453393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.453479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.453506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.453534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.453555 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.487063 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.487383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.487550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.487717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.487881 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.513424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.513505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.513530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.513554 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.513575 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: E1006 11:48:55.533728 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.538258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.538309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.538327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.538375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.538426 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: E1006 11:48:55.557845 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:55Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:55 crc kubenswrapper[4958]: E1006 11:48:55.558064 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.560090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.560187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.560216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.560248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.560270 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.663461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.663511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.663530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.663552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.663569 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.767178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.767331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.767351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.767375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.767396 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.870507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.870576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.870593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.870618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.870640 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.913335 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:55 crc kubenswrapper[4958]: E1006 11:48:55.913525 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.972922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.973001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.973026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.973056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:55 crc kubenswrapper[4958]: I1006 11:48:55.973079 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:55Z","lastTransitionTime":"2025-10-06T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.075794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.075859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.075880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.075907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.075931 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.178648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.178723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.178747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.178771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.178790 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.281936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.281998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.282017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.282042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.282060 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.385555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.385609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.385626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.385648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.385664 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.488216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.488294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.488317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.488345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.488365 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.592183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.592243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.592260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.592284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.592304 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.694462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.694533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.694557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.694586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.694607 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.797640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.797788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.797874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.797955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.797986 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.901274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.901334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.901350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.901374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.901391 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:56Z","lastTransitionTime":"2025-10-06T11:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.913709 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:56 crc kubenswrapper[4958]: E1006 11:48:56.913949 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.914087 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:56 crc kubenswrapper[4958]: E1006 11:48:56.914538 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.914810 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:56 crc kubenswrapper[4958]: E1006 11:48:56.915046 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.948272 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.969028 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.984798 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:56 crc kubenswrapper[4958]: I1006 11:48:56.998764 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:56Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc 
kubenswrapper[4958]: I1006 11:48:57.005226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.005284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.005318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.005345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.005362 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.016136 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.033837 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:46Z\\\",\\\"message\\\":\\\"ExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:46.912470 7033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:48:46.912521 7033 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1006 11:48:46.912526 7033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:46.912534 7033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:46.912552 7033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:46.912589 7033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:48:46.912650 7033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:48:46.912667 7033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:48:46.912704 7033 factory.go:656] Stopping watch factory\\\\nI1006 11:48:46.912725 7033 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:48:46.912741 7033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:46.912757 7033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:48:46.912772 7033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 11:48:46.912859 7033 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.043599 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.053312 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.063478 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.079878 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.091329 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.103423 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.108489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.108558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.108573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.108591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.108604 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.121067 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.135351 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.148783 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.157811 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.168506 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:48:57Z is after 2025-08-24T17:21:41Z" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.211870 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.211920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.211938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.211959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.211975 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.314918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.314984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.315005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.315032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.315053 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.417687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.417753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.417777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.417810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.417832 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.520715 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.520788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.520805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.520829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.520854 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.624237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.624340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.624358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.624384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.624402 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.727633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.727684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.727698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.727713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.727728 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.830207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.830266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.830297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.830320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.830337 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.912658 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:57 crc kubenswrapper[4958]: E1006 11:48:57.912959 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.933757 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.933812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.933829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.933851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:57 crc kubenswrapper[4958]: I1006 11:48:57.933868 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:57Z","lastTransitionTime":"2025-10-06T11:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.036037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.036203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.036220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.036236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.036248 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.138896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.138958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.138981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.139008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.139031 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.243226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.243321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.243355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.243380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.243392 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.346933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.346997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.347013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.347038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.347060 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.450189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.450248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.450265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.450290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.450308 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.552560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.552612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.552629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.552650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.552668 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.656301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.656369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.656390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.656413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.656432 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.759348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.759420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.759444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.759475 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.759496 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.862726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.862797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.862829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.862859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.862886 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.912615 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.912703 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:48:58 crc kubenswrapper[4958]: E1006 11:48:58.912818 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.912852 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:58 crc kubenswrapper[4958]: E1006 11:48:58.913503 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:48:58 crc kubenswrapper[4958]: E1006 11:48:58.913741 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.914055 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:48:58 crc kubenswrapper[4958]: E1006 11:48:58.914397 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.965192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.965409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.965431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.965451 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:58 crc kubenswrapper[4958]: I1006 11:48:58.965467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:58Z","lastTransitionTime":"2025-10-06T11:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.068364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.068427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.068444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.068467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.068487 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.171624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.171703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.171727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.171760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.171783 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.275202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.275284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.275307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.275342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.275365 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.378366 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.378424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.378442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.378467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.378488 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.481438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.481498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.481514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.481536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.481553 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.584640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.584707 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.584729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.584759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.584783 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.687412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.687492 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.687518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.687547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.687569 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.790848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.790991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.791008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.791027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.791042 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.894509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.894629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.894649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.894676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.894694 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.912981 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:48:59 crc kubenswrapper[4958]: E1006 11:48:59.913558 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.934027 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.997231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.997302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.997328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.997357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:48:59 crc kubenswrapper[4958]: I1006 11:48:59.997379 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:48:59Z","lastTransitionTime":"2025-10-06T11:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.098993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.099027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.099039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.099054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.099064 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.207181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.207254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.207275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.207313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.207337 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.310457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.310535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.310560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.310590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.310613 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.413931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.413984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.414001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.414022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.414038 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.517068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.517127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.517180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.517206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.517223 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.620430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.620486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.620503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.620526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.620543 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.723821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.723865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.723881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.723901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.723916 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.858227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.858471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.858507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.858543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.858569 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.912757 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.912903 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:00 crc kubenswrapper[4958]: E1006 11:49:00.912957 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:00 crc kubenswrapper[4958]: E1006 11:49:00.913125 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.913206 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:00 crc kubenswrapper[4958]: E1006 11:49:00.913425 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.932100 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.961322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.961627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.961673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.961927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:00 crc kubenswrapper[4958]: I1006 11:49:00.962206 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:00Z","lastTransitionTime":"2025-10-06T11:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.065556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.065633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.065649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.065677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.065696 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.170615 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.170678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.170698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.170737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.170756 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.273954 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.274020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.274037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.274061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.274077 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.377627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.377692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.377709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.377737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.377755 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.480628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.480702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.480718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.480746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.480769 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.584468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.584558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.584581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.584613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.584641 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.687823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.687866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.687878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.687895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.687908 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.790992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.791073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.791099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.791129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.791190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.894445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.894516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.894540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.894568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.894594 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.913128 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:01 crc kubenswrapper[4958]: E1006 11:49:01.913379 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.997273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.997352 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.997376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.997407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:01 crc kubenswrapper[4958]: I1006 11:49:01.997428 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:01Z","lastTransitionTime":"2025-10-06T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.101304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.101361 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.101380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.101406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.101424 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.204372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.204437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.204463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.204489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.204507 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.307875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.307941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.307964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.307998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.308022 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.412681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.412746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.412768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.412793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.412812 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.515930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.516051 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.516071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.516094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.516112 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.619687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.619742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.619755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.619778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.619792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.724185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.724264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.724289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.724322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.724344 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.827330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.827394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.827411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.827434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.827453 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.913184 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:49:02 crc kubenswrapper[4958]: E1006 11:49:02.913383 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.913211 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:49:02 crc kubenswrapper[4958]: E1006 11:49:02.913521 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.914574 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:49:02 crc kubenswrapper[4958]: E1006 11:49:02.914761 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.930106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.930188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.930208 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.930230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:02 crc kubenswrapper[4958]: I1006 11:49:02.930247 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:02Z","lastTransitionTime":"2025-10-06T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.032588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.032652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.032670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.032693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.032711 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.135860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.135963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.135981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.136004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.136020 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.239362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.239420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.239443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.239470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.239490 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.342701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.342769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.342788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.342817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.342835 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.445697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.445781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.446258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.446332 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.446357 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.549594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.549645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.549664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.549692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.549710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.652700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.652759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.652780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.652809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.652827 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.755385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.755437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.755449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.755463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.755473 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.859784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.859894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.859914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.859986 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.860007 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.912881 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5"
Oct 06 11:49:03 crc kubenswrapper[4958]: E1006 11:49:03.913008 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.963469 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.963533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.963550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.963578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:03 crc kubenswrapper[4958]: I1006 11:49:03.963595 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:03Z","lastTransitionTime":"2025-10-06T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.067005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.067091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.067102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.067121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.067136 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.170227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.170289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.170305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.170327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.170344 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.274041 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.274119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.274172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.274206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.274232 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.377773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.377856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.377880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.377912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.377933 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.481687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.481756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.481780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.481812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.481838 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.584935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.585075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.585094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.585121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.585138 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.688108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.688199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.688228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.688253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.688269 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.791242 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.791441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.791469 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.791497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.791515 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.894524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.894598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.894622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.894654 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.894681 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.912433 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.912495 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.912450 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:49:04 crc kubenswrapper[4958]: E1006 11:49:04.912659 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:49:04 crc kubenswrapper[4958]: E1006 11:49:04.912768 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:49:04 crc kubenswrapper[4958]: E1006 11:49:04.912901 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.997632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.997695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.997718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.997748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:04 crc kubenswrapper[4958]: I1006 11:49:04.997768 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:04Z","lastTransitionTime":"2025-10-06T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.107320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.107411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.107433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.107458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.107476 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.210103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.210238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.210264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.210292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.210313 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.313220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.313301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.313313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.313327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.313339 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.416109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.416209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.416228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.416252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.416268 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.519314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.519378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.519395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.519419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.519436 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.622437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.622500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.622517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.622541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.622561 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.726444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.726848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.726893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.726936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.726959 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.830743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.830808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.830824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.830846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.830862 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.865674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.865738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.865795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.865819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.865838 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.886911 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:05Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.891436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.891486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.891498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.891514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.891528 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.908044 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:05Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.912995 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.913104 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.914086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.914177 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.914196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.914218 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.914237 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.927443 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:05Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.932025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.932098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.932124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.932460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.932586 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.947210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.947388 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.947501 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs podName:cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c nodeName:}" failed. No retries permitted until 2025-10-06 11:50:09.947474085 +0000 UTC m=+163.833499423 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs") pod "network-metrics-daemon-4mxw5" (UID: "cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.952348 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f
469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:05Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.956996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.957073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.957091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.957113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.957130 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.974274 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71775188-f0be-4b91-a34f-f469bf9337b6\\\",\\\"systemUUID\\\":\\\"c838f8b7-52cc-46da-a415-eff8b7b887b9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:05Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:05 crc kubenswrapper[4958]: E1006 11:49:05.974494 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.976508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.976569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.976592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.976621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:05 crc kubenswrapper[4958]: I1006 11:49:05.976643 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:05Z","lastTransitionTime":"2025-10-06T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.079415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.079473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.079491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.079513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.079530 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.182559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.182610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.182626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.182647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.182665 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.285663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.285784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.285807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.285829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.285845 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.388434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.388675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.388703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.388738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.388755 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.491102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.491191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.491208 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.491234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.491252 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.593801 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.593880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.593900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.593928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.593948 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.696964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.697041 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.697067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.697100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.697129 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.799662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.799724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.799743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.799769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.799793 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.902946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.902994 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.903007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.903026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.903041 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:06Z","lastTransitionTime":"2025-10-06T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.912634 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:06 crc kubenswrapper[4958]: E1006 11:49:06.912822 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.912852 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.912913 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:06 crc kubenswrapper[4958]: E1006 11:49:06.913028 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:06 crc kubenswrapper[4958]: E1006 11:49:06.913121 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.932570 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.964017 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd589959-144a-41bd-b6d5-a872e5c25cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:48:46Z\\\",\\\"message\\\":\\\"ExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:48:46.912470 7033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:48:46.912521 7033 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1006 11:48:46.912526 7033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:48:46.912534 7033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:48:46.912552 7033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:48:46.912589 7033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:48:46.912650 7033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:48:46.912667 7033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:48:46.912704 7033 factory.go:656] Stopping watch factory\\\\nI1006 11:48:46.912725 7033 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:48:46.912741 7033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:48:46.912757 7033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:48:46.912772 7033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 11:48:46.912859 7033 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:48:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://738882f7b5c7b87e6d
6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzpjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntrlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:06 crc kubenswrapper[4958]: I1006 11:49:06.981608 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba205413-cfaf-4b44-a363-11756cfa7dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb611e3b5c05dc18f2016ae86df59c40ce281cb8b0180bd570de0ef76740db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec146c3b7bff7005ab104f50dbc189e81ee82b7b2275ddb606e60832db5a0046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcb7478b9b3f837afca516fc35e7f3f41e293147c5f27119fcca05f0d4477b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7f87b2139a49d50f883f483bfbfff0af00d2580d85bbddb604d395afbae3a73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.001958 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a93985bd27818990d6f5ec103d8c0722e1317a74398d64300148018c4f77b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f39e4dcc064167825d4f7323323a394d1a3d1a374b7104bffc6ad56b344d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:06Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.009825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.009894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.009919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.009949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.009972 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.024109 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.046189 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwknw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e46de0e-9f67-4dae-8601-65004d0d71c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb713c9c2659566eee132d371a68bc10d24c6761c017a1e8b0ce8bf974b9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://244104564b5ad6b646671bdc75f440e5568a0c2449c111e56431328365d044fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f832e5afbc55af6f3de6756005ddfef328c3d8357b1bbc8765469e5b254cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99bec1a2f08582cd407d108d771389ab034877ee58f59e7d899541d0c69809b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d090
f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d090f1099ca4b6472934163fb66f1875dc03819cf0907f3058aa5006d9d2198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdbeadff713969b3914296b7f045991c7da126f8181ee9b60d4607d7a565dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e496dd75c5dd4e480d06cd46302b6172620b35eb24aff3308865a0931677bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wghx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwknw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.068762 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4w4h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-10-06T11:48:35Z\\\",\\\"message\\\":\\\"2025-10-06T11:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75\\\\n2025-10-06T11:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_98b69acf-4cc1-4904-b48b-cf009f0cef75 to /host/opt/cni/bin/\\\\n2025-10-06T11:47:50Z [verbose] multus-daemon started\\\\n2025-10-06T11:47:50Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:48:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ncxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4w4h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.085422 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fae35f5c37636077eb5d2fee37236e39d009d77aa158458d267aed5fc29f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be6251825a9cd485da028e6f7fd169dae74215a
7191607dfd0a4b7a470e43c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-whw6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.100534 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd869cc-2703-4f0d-b694-32bb7735025e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9298df9f82818033b7c6c96e0d7fd59290a7ef86679ae958f19f1c841998715c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d95fcda3db5aafa2634bf9d4037a6b8b0456
bf4be6fc620c742e57e86aed114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8jbr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.112977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.113021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.113033 4958 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.113048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.113057 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.124376 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d80adc-479d-45d3-8a5a-8b7a96d32770\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://376f1a7800688b910ce6fe69690e4138e4a549b8e25f18c8ee3016cd96c94ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e898f6fcf406fe849a73767f83f6dd71b56fae41fd8c8078425f1a76952480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68c0aaf26d798be1e54562398e215687d5d87d1365dcf6932e84bfdebbfef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd51f876ba05a7a6eaa8b3b5192e9481df601848d04f99ebaa118e46d7c73e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9824eb1afb8c5aae680c233213cd4177b7083863b1d57b11b73538f01bc19e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d37b0a556d9c93384881a2bd3d6d16342ed70f9917f49aa348dd3b32586575d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37b0a556d9c93384881a2bd3d6d16342ed70f9917f49aa348dd3b32586575d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2515eb7bd285f5fe76b86155a8d863d766852939740431dfdd9ddc148726b8dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2515eb7bd285f5fe76b86155a8d863d766852939740431dfdd9ddc148726b8dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7e0a8bb1d689ef3eb1c2db4df213be7b4ba246e09342
4a0034b18ce5b68b2cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0a8bb1d689ef3eb1c2db4df213be7b4ba246e093424a0034b18ce5b68b2cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.139186 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e346fee2-2887-49c0-ad05-0e4ca19453e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42a6335b390e5de2da2d935de9e0cb1e60391521813d0928c59d1d9c27ae8e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dfdaf2d8faa6e7ebde08361728077d3db9118ec963dc4c1fc8caf9eb9c752b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbba96aaea45073aaeada1136c677ba1ba87248f004cf3d7394ddfccf412478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95e5c689ad307b63dec13d774d1ff8fe8fa0a11a7d33e18a0470000a4df0f6dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.162452 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.174100 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8xzjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3545ace-6477-4c0b-9576-f32c6748e0a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94d788ac8f6a4665db7852c21f55128c9afb7976c9e18fbc302c02d8c3a0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz2sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8xzjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.184989 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vns7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33eae78-7219-4551-bea7-dcfaf22c4e62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a67770ab5e7e0af1ed6a7b4965bc2b3937e021ddf18a765edd7b65196ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8htn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vns7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.198079 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac3e86aa-2266-4241-8a63-1787bc4dcad2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35372e4733f8be2a27441a86646b882967d6b859b3810925f64856d952f15215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceb14a5d5b24537500f212dc4a7714e58a3019cfa39ba72563b38b9e5ac539f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceb14a5d5b24537500f212dc4a7714e58a3019cfa39ba72563b38b9e5ac539f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.211033 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd5b11f-6414-4c57-99b8-516b3f4be804\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f40315a451e12055987bec9804330dac96578f86d7fdedfd96ed187936df857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b16a70f5c26106f885601fdc9f41b6d606668ed3a4a527be3a7674dd4217d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cedfe1a3600f0eb73fd787333690ed1fd8cd5f766a54ba40eb6aad808593c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da49c178e22edd3e5f0b37f05dca576095bf83765cb9e46fcb68b1b362e304f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdb3ac1507f9a55981ed07786d2ab8bd2aab7072e8e625b17f15bc27f40f460d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:47:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1006 11:47:47.776944 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 11:47:47.777183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:47:47.778589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-624858867/tls.crt::/tmp/serving-cert-624858867/tls.key\\\\\\\"\\\\nI1006 11:47:48.078332 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:47:48.084427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:47:48.084453 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:47:48.084475 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:47:48.084480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:47:48.108531 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1006 11:47:48.108546 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1006 11:47:48.108561 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:47:48.108586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:47:48.108589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:47:48.108592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:47:48.108594 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1006 11:47:48.110441 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:48:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533257b553a482e13f8c8930f7ee7d6f8b70af1499cf861d3dfe73cb2225aa2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a939252201b98e9a723f5a6777d7f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c0a2bcb44c905e3d39402f0992d3e50a9
39252201b98e9a723f5a6777d7f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:47:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.214907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.214976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.214992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.215358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.215403 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.224285 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c0c63694c68c28642df297d1ceff03425621c51b170c758b3fedeafbccc89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.237000 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:47:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbac1bba642ea0238b738ec34631d44326b215575297c7ea3595aa9d691849d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.245890 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdkh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4mxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:49:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:49:07 crc 
kubenswrapper[4958]: I1006 11:49:07.318499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.318560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.318581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.318608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.318631 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.421121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.421214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.421239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.421267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.421289 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.524854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.524932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.524953 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.524979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.524999 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.627562 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.627611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.627627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.627649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.627667 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.730605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.730651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.730660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.730680 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.730691 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.834011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.834048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.834058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.834073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.834084 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.912276 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:07 crc kubenswrapper[4958]: E1006 11:49:07.912519 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.936748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.936828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.936866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.936890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:07 crc kubenswrapper[4958]: I1006 11:49:07.936908 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:07Z","lastTransitionTime":"2025-10-06T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.040355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.040761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.040910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.041056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.041530 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.145065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.145173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.145201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.145233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.145255 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.248222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.248303 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.248324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.248356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.248377 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.351524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.351558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.351569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.351583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.351594 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.453587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.453617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.453626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.453640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.453650 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.555975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.556034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.556056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.556082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.556103 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.658950 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.659005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.659061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.659090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.659113 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.762921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.763024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.763046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.763070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.763087 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.866458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.866525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.866543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.866631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.866704 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.912569 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.912700 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:08 crc kubenswrapper[4958]: E1006 11:49:08.912793 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:08 crc kubenswrapper[4958]: E1006 11:49:08.912894 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.913111 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:08 crc kubenswrapper[4958]: E1006 11:49:08.913241 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.970449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.970526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.970544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.970581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:08 crc kubenswrapper[4958]: I1006 11:49:08.970617 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:08Z","lastTransitionTime":"2025-10-06T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.074563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.074648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.074666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.074689 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.074703 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.178480 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.178542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.178566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.178596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.178618 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.281436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.281486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.281501 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.281517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.281529 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.384298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.384374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.384389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.384407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.384419 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.487425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.487465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.487477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.487497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.487508 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.590214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.590272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.590284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.590302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.590315 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.693439 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.693534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.693555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.693579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.693596 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.796205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.796318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.796343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.796374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.796398 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.899220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.899290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.899308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.899331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.899349 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:09Z","lastTransitionTime":"2025-10-06T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:09 crc kubenswrapper[4958]: I1006 11:49:09.913032 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:09 crc kubenswrapper[4958]: E1006 11:49:09.913249 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.001937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.002023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.002042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.002064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.002082 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.105761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.105834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.105854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.105879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.105898 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.208745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.208838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.208858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.208889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.208913 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.312093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.312213 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.312231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.312253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.312270 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.415661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.415729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.415750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.415775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.415792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.519340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.519406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.519427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.519452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.519469 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.623402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.623503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.623528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.623563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.623588 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.727048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.727095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.727107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.727125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.727139 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.829761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.829824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.829833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.829855 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.829869 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.912778 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.912797 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.912951 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:10 crc kubenswrapper[4958]: E1006 11:49:10.913101 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:10 crc kubenswrapper[4958]: E1006 11:49:10.913381 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:10 crc kubenswrapper[4958]: E1006 11:49:10.913543 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.933403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.933462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.933479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.933504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:10 crc kubenswrapper[4958]: I1006 11:49:10.933522 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:10Z","lastTransitionTime":"2025-10-06T11:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.036902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.036990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.037014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.037046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.037071 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.140600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.140659 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.140675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.140696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.140713 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.243897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.243971 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.243989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.244009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.244027 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.346948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.347013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.347029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.347051 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.347064 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.450067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.450192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.450214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.450246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.450264 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.553545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.553599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.553617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.553640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.553701 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.656217 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.656281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.656303 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.656331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.656355 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.759521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.759754 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.759926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.760135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.760377 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.863568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.863644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.863670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.863699 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.863722 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.912771 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:11 crc kubenswrapper[4958]: E1006 11:49:11.913256 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.965968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.966018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.966035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.966061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:11 crc kubenswrapper[4958]: I1006 11:49:11.966080 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:11Z","lastTransitionTime":"2025-10-06T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.069651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.069754 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.069771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.069803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.069824 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.172426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.172765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.172874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.172969 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.173064 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.275905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.275979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.276000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.276030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.276050 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.378980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.379056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.379082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.379111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.379133 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.481877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.481942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.481961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.481984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.482001 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.585060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.585128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.585212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.585247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.585269 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.688140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.688250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.688275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.688304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.688325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.791798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.791872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.791907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.791937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.791962 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.894970 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.895042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.895078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.895136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.895190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.914181 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:12 crc kubenswrapper[4958]: E1006 11:49:12.914306 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.914631 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:12 crc kubenswrapper[4958]: E1006 11:49:12.914725 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.915021 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:12 crc kubenswrapper[4958]: E1006 11:49:12.915115 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.999037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.999121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.999186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.999212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:12 crc kubenswrapper[4958]: I1006 11:49:12.999340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:12Z","lastTransitionTime":"2025-10-06T11:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.103239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.103304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.103321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.103345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.103363 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.206244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.206312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.206333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.206360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.206379 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.309925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.309983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.309997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.310015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.310029 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.413250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.413322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.413339 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.413363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.413379 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.516568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.516636 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.516656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.516681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.516699 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.619792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.619930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.619956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.619988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.620008 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.724029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.724099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.724113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.724135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.724191 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.827662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.827724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.827746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.827773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.827794 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.913078 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.914356 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:49:13 crc kubenswrapper[4958]: E1006 11:49:13.914475 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:13 crc kubenswrapper[4958]: E1006 11:49:13.914643 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.930736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.930777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.930788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.930803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:13 crc kubenswrapper[4958]: I1006 11:49:13.930815 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:13Z","lastTransitionTime":"2025-10-06T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.033368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.033434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.033452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.033476 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.033497 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.136122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.136211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.136228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.136252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.136271 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.238987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.239043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.239067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.239097 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.239123 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.341984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.342050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.342068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.342095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.342111 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.445388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.445442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.445466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.445491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.445514 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.548710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.548816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.548856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.548895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.548920 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.652078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.652226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.652248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.652273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.652330 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.755808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.755881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.755900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.755926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.755947 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.859397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.859494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.859513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.859543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.859562 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.912489 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.912633 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:14 crc kubenswrapper[4958]: E1006 11:49:14.912861 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.912903 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:49:14 crc kubenswrapper[4958]: E1006 11:49:14.913065 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:49:14 crc kubenswrapper[4958]: E1006 11:49:14.913565 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.962026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.962080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.962096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.962116 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:14 crc kubenswrapper[4958]: I1006 11:49:14.962133 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:14Z","lastTransitionTime":"2025-10-06T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.065681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.065807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.065825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.065848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.065867 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.169041 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.169120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.169176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.169210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.169234 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.273023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.273091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.273121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.273180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.273210 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.377042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.377127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.377202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.377236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.377299 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.480642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.480700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.480723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.480742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.480754 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.588303 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.588582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.588600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.589211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.589265 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.692383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.692459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.692481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.692508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.692528 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.796049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.796106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.796123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.796184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.796202 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.899470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.899537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.899557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.899578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.899623 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:15Z","lastTransitionTime":"2025-10-06T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:15 crc kubenswrapper[4958]: I1006 11:49:15.913437 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5"
Oct 06 11:49:15 crc kubenswrapper[4958]: E1006 11:49:15.913660 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.002681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.002732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.002745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.002760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.002772 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:16Z","lastTransitionTime":"2025-10-06T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.106319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.106393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.106411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.106437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.106454 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:16Z","lastTransitionTime":"2025-10-06T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.209236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.209300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.209322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.209350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.209377 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:16Z","lastTransitionTime":"2025-10-06T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.312936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.313019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.313037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.313060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.313075 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:16Z","lastTransitionTime":"2025-10-06T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.358651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.358715 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.358734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.358759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.358776 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:49:16Z","lastTransitionTime":"2025-10-06T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.430997 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"]
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.431911 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.435855 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.435916 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.436236 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.436796 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.494543 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.494514847 podStartE2EDuration="1m27.494514847s" podCreationTimestamp="2025-10-06 11:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.461598776 +0000 UTC m=+110.347624114" watchObservedRunningTime="2025-10-06 11:49:16.494514847 +0000 UTC m=+110.380540195"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.563107 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.56308701 podStartE2EDuration="16.56308701s" podCreationTimestamp="2025-10-06 11:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.56231745 +0000 UTC m=+110.448342778" watchObservedRunningTime="2025-10-06 11:49:16.56308701 +0000 UTC m=+110.449112328"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.575505 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3bbe2b7-0dc0-410e-a23e-dd229c675376-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.575557 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3bbe2b7-0dc0-410e-a23e-dd229c675376-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.575585 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c3bbe2b7-0dc0-410e-a23e-dd229c675376-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.575750 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c3bbe2b7-0dc0-410e-a23e-dd229c675376-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.575825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bbe2b7-0dc0-410e-a23e-dd229c675376-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.677360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c3bbe2b7-0dc0-410e-a23e-dd229c675376-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.677479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bbe2b7-0dc0-410e-a23e-dd229c675376-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.677526 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c3bbe2b7-0dc0-410e-a23e-dd229c675376-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.677681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3bbe2b7-0dc0-410e-a23e-dd229c675376-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.677846 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3bbe2b7-0dc0-410e-a23e-dd229c675376-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.677964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c3bbe2b7-0dc0-410e-a23e-dd229c675376-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.678200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c3bbe2b7-0dc0-410e-a23e-dd229c675376-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.679990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3bbe2b7-0dc0-410e-a23e-dd229c675376-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.686455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bbe2b7-0dc0-410e-a23e-dd229c675376-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.699439 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lwknw" podStartSLOduration=89.699414626 podStartE2EDuration="1m29.699414626s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.685289076 +0000 UTC m=+110.571314414" watchObservedRunningTime="2025-10-06 11:49:16.699414626 +0000 UTC m=+110.585439954"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.719076 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3bbe2b7-0dc0-410e-a23e-dd229c675376-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mglqw\" (UID: \"c3bbe2b7-0dc0-410e-a23e-dd229c675376\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.723181 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4w4h5" podStartSLOduration=89.723136986 podStartE2EDuration="1m29.723136986s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.721389491 +0000 UTC m=+110.607414809" watchObservedRunningTime="2025-10-06 11:49:16.723136986 +0000 UTC m=+110.609162304"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.753819 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podStartSLOduration=89.753801888 podStartE2EDuration="1m29.753801888s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.740394688 +0000 UTC m=+110.626420016" watchObservedRunningTime="2025-10-06 11:49:16.753801888 +0000 UTC m=+110.639827206"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.754169 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8jbr9" podStartSLOduration=89.754141587 podStartE2EDuration="1m29.754141587s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.753704796 +0000 UTC m=+110.639730094" watchObservedRunningTime="2025-10-06 11:49:16.754141587 +0000 UTC m=+110.640166905"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.758342 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw"
Oct 06 11:49:16 crc kubenswrapper[4958]: W1006 11:49:16.779870 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3bbe2b7_0dc0_410e_a23e_dd229c675376.slice/crio-7ae8c18e6f01c3e65f5171b45e0016ac3cdc2390d1db604a4273f2d8c445ed28 WatchSource:0}: Error finding container 7ae8c18e6f01c3e65f5171b45e0016ac3cdc2390d1db604a4273f2d8c445ed28: Status 404 returned error can't find the container with id 7ae8c18e6f01c3e65f5171b45e0016ac3cdc2390d1db604a4273f2d8c445ed28
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.781210 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.781185724 podStartE2EDuration="59.781185724s" podCreationTimestamp="2025-10-06 11:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.780370923 +0000 UTC m=+110.666396261" watchObservedRunningTime="2025-10-06 11:49:16.781185724 +0000 UTC m=+110.667211072"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.805640 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.805617763 podStartE2EDuration="1m28.805617763s" podCreationTimestamp="2025-10-06 11:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.804827233 +0000 UTC m=+110.690852551" watchObservedRunningTime="2025-10-06 11:49:16.805617763 +0000 UTC m=+110.691643081"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.839990 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8xzjs" podStartSLOduration=89.839971012 podStartE2EDuration="1m29.839971012s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.838962485 +0000 UTC m=+110.724987813" watchObservedRunningTime="2025-10-06 11:49:16.839971012 +0000 UTC m=+110.725996330"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.853962 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vns7w" podStartSLOduration=89.853947187 podStartE2EDuration="1m29.853947187s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.853321831 +0000 UTC m=+110.739347169" watchObservedRunningTime="2025-10-06 11:49:16.853947187 +0000 UTC m=+110.739972505"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.881712 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.881692803 podStartE2EDuration="17.881692803s" podCreationTimestamp="2025-10-06 11:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:16.88045053 +0000 UTC m=+110.766475848" watchObservedRunningTime="2025-10-06 11:49:16.881692803 +0000 UTC m=+110.767718111"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.913129 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.913190 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:49:16 crc kubenswrapper[4958]: E1006 11:49:16.914068 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:49:16 crc kubenswrapper[4958]: I1006 11:49:16.914089 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:49:16 crc kubenswrapper[4958]: E1006 11:49:16.914134 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:49:16 crc kubenswrapper[4958]: E1006 11:49:16.914183 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:49:17 crc kubenswrapper[4958]: I1006 11:49:17.610442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw" event={"ID":"c3bbe2b7-0dc0-410e-a23e-dd229c675376","Type":"ContainerStarted","Data":"6e9e2157e47087640a25c9a89f2504b792f6f07284824924feb921ecc6281d3d"}
Oct 06 11:49:17 crc kubenswrapper[4958]: I1006 11:49:17.610515 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw" event={"ID":"c3bbe2b7-0dc0-410e-a23e-dd229c675376","Type":"ContainerStarted","Data":"7ae8c18e6f01c3e65f5171b45e0016ac3cdc2390d1db604a4273f2d8c445ed28"}
Oct 06 11:49:17 crc kubenswrapper[4958]: I1006 11:49:17.631783 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mglqw" podStartSLOduration=90.631760829 podStartE2EDuration="1m30.631760829s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:17.630752683 +0000 UTC m=+111.516778021" watchObservedRunningTime="2025-10-06 11:49:17.631760829 +0000 UTC m=+111.517786147"
Oct 06 11:49:17 crc kubenswrapper[4958]: I1006 11:49:17.913054 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5"
Oct 06 11:49:17 crc kubenswrapper[4958]: E1006 11:49:17.913269 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:18 crc kubenswrapper[4958]: I1006 11:49:18.913178 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:18 crc kubenswrapper[4958]: I1006 11:49:18.913215 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:18 crc kubenswrapper[4958]: E1006 11:49:18.913308 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:18 crc kubenswrapper[4958]: I1006 11:49:18.913331 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:18 crc kubenswrapper[4958]: E1006 11:49:18.913454 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:18 crc kubenswrapper[4958]: E1006 11:49:18.913520 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:19 crc kubenswrapper[4958]: I1006 11:49:19.912247 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:19 crc kubenswrapper[4958]: E1006 11:49:19.912490 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:20 crc kubenswrapper[4958]: I1006 11:49:20.913031 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:20 crc kubenswrapper[4958]: I1006 11:49:20.913132 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:20 crc kubenswrapper[4958]: E1006 11:49:20.913202 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:20 crc kubenswrapper[4958]: E1006 11:49:20.913400 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:20 crc kubenswrapper[4958]: I1006 11:49:20.913344 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:20 crc kubenswrapper[4958]: E1006 11:49:20.913843 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:21 crc kubenswrapper[4958]: I1006 11:49:21.913049 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:21 crc kubenswrapper[4958]: E1006 11:49:21.913335 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.626937 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/1.log" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.627571 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/0.log" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.627616 4958 generic.go:334] "Generic (PLEG): container finished" podID="8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7" containerID="4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0" exitCode=1 Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.627651 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerDied","Data":"4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0"} Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.627687 4958 scope.go:117] "RemoveContainer" containerID="979bf29caab50b132b30dd6eff62aed5d2209ae1c43fde2433101410df35035b" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.628067 4958 scope.go:117] "RemoveContainer" containerID="4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0" Oct 06 11:49:22 crc kubenswrapper[4958]: E1006 11:49:22.628443 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4w4h5_openshift-multus(8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7)\"" pod="openshift-multus/multus-4w4h5" podUID="8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.912857 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.912896 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:22 crc kubenswrapper[4958]: I1006 11:49:22.912864 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:22 crc kubenswrapper[4958]: E1006 11:49:22.913031 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:22 crc kubenswrapper[4958]: E1006 11:49:22.913202 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:22 crc kubenswrapper[4958]: E1006 11:49:22.913425 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:23 crc kubenswrapper[4958]: I1006 11:49:23.634975 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/1.log" Oct 06 11:49:23 crc kubenswrapper[4958]: I1006 11:49:23.912320 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:23 crc kubenswrapper[4958]: E1006 11:49:23.912530 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:24 crc kubenswrapper[4958]: I1006 11:49:24.912432 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:24 crc kubenswrapper[4958]: I1006 11:49:24.912552 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:24 crc kubenswrapper[4958]: I1006 11:49:24.912672 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:24 crc kubenswrapper[4958]: E1006 11:49:24.913541 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:24 crc kubenswrapper[4958]: E1006 11:49:24.913637 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:24 crc kubenswrapper[4958]: I1006 11:49:24.913707 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:49:24 crc kubenswrapper[4958]: E1006 11:49:24.913744 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:24 crc kubenswrapper[4958]: E1006 11:49:24.913989 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntrlk_openshift-ovn-kubernetes(cd589959-144a-41bd-b6d5-a872e5c25cee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" Oct 06 11:49:25 crc kubenswrapper[4958]: I1006 11:49:25.912220 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:25 crc kubenswrapper[4958]: E1006 11:49:25.912440 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:26 crc kubenswrapper[4958]: I1006 11:49:26.912325 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:26 crc kubenswrapper[4958]: I1006 11:49:26.912358 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:26 crc kubenswrapper[4958]: I1006 11:49:26.912421 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:26 crc kubenswrapper[4958]: E1006 11:49:26.914102 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:26 crc kubenswrapper[4958]: E1006 11:49:26.914311 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:26 crc kubenswrapper[4958]: E1006 11:49:26.914445 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:26 crc kubenswrapper[4958]: E1006 11:49:26.942925 4958 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 06 11:49:27 crc kubenswrapper[4958]: E1006 11:49:27.000372 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 06 11:49:27 crc kubenswrapper[4958]: I1006 11:49:27.913253 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:27 crc kubenswrapper[4958]: E1006 11:49:27.913480 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:28 crc kubenswrapper[4958]: I1006 11:49:28.912608 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:28 crc kubenswrapper[4958]: E1006 11:49:28.912749 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:28 crc kubenswrapper[4958]: I1006 11:49:28.912612 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:28 crc kubenswrapper[4958]: I1006 11:49:28.912621 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:28 crc kubenswrapper[4958]: E1006 11:49:28.912959 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:28 crc kubenswrapper[4958]: E1006 11:49:28.913012 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:29 crc kubenswrapper[4958]: I1006 11:49:29.912760 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:29 crc kubenswrapper[4958]: E1006 11:49:29.912931 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:30 crc kubenswrapper[4958]: I1006 11:49:30.913098 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:30 crc kubenswrapper[4958]: E1006 11:49:30.913243 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:30 crc kubenswrapper[4958]: I1006 11:49:30.913441 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:30 crc kubenswrapper[4958]: I1006 11:49:30.913443 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:30 crc kubenswrapper[4958]: E1006 11:49:30.913604 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:30 crc kubenswrapper[4958]: E1006 11:49:30.913741 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:31 crc kubenswrapper[4958]: I1006 11:49:31.912479 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:31 crc kubenswrapper[4958]: E1006 11:49:31.912725 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:32 crc kubenswrapper[4958]: E1006 11:49:32.001507 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Oct 06 11:49:32 crc kubenswrapper[4958]: I1006 11:49:32.913358 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:32 crc kubenswrapper[4958]: I1006 11:49:32.913395 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:32 crc kubenswrapper[4958]: I1006 11:49:32.913396 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:32 crc kubenswrapper[4958]: E1006 11:49:32.914429 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:32 crc kubenswrapper[4958]: E1006 11:49:32.914566 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:32 crc kubenswrapper[4958]: E1006 11:49:32.914645 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:33 crc kubenswrapper[4958]: I1006 11:49:33.912706 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:33 crc kubenswrapper[4958]: E1006 11:49:33.912961 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:34 crc kubenswrapper[4958]: I1006 11:49:34.913579 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:34 crc kubenswrapper[4958]: I1006 11:49:34.913579 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:34 crc kubenswrapper[4958]: I1006 11:49:34.913627 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:34 crc kubenswrapper[4958]: I1006 11:49:34.914226 4958 scope.go:117] "RemoveContainer" containerID="4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0" Oct 06 11:49:34 crc kubenswrapper[4958]: E1006 11:49:34.914335 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:34 crc kubenswrapper[4958]: E1006 11:49:34.914484 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:34 crc kubenswrapper[4958]: E1006 11:49:34.914684 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:35 crc kubenswrapper[4958]: I1006 11:49:35.682036 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/1.log" Oct 06 11:49:35 crc kubenswrapper[4958]: I1006 11:49:35.682348 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerStarted","Data":"0bcbdb53e28bf48f1081ca622ed415816a291e7ae71edd74a7d1f241c95fe82e"} Oct 06 11:49:35 crc kubenswrapper[4958]: I1006 11:49:35.913113 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:35 crc kubenswrapper[4958]: E1006 11:49:35.913344 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:36 crc kubenswrapper[4958]: I1006 11:49:36.912872 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:36 crc kubenswrapper[4958]: I1006 11:49:36.912955 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:36 crc kubenswrapper[4958]: I1006 11:49:36.913038 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:36 crc kubenswrapper[4958]: E1006 11:49:36.914779 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:36 crc kubenswrapper[4958]: E1006 11:49:36.915070 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:36 crc kubenswrapper[4958]: E1006 11:49:36.915239 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:37 crc kubenswrapper[4958]: E1006 11:49:37.039939 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 06 11:49:37 crc kubenswrapper[4958]: I1006 11:49:37.913040 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:37 crc kubenswrapper[4958]: E1006 11:49:37.913339 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:38 crc kubenswrapper[4958]: I1006 11:49:38.912345 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:38 crc kubenswrapper[4958]: I1006 11:49:38.912379 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:38 crc kubenswrapper[4958]: E1006 11:49:38.912516 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:38 crc kubenswrapper[4958]: I1006 11:49:38.912770 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:38 crc kubenswrapper[4958]: E1006 11:49:38.912870 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:38 crc kubenswrapper[4958]: E1006 11:49:38.913182 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:39 crc kubenswrapper[4958]: I1006 11:49:39.912598 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:39 crc kubenswrapper[4958]: E1006 11:49:39.912788 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:39 crc kubenswrapper[4958]: I1006 11:49:39.913858 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.704508 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/3.log" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.708283 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerStarted","Data":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.708809 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.750557 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podStartSLOduration=113.75052772 podStartE2EDuration="1m53.75052772s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:40.74852692 +0000 UTC m=+134.634552268" watchObservedRunningTime="2025-10-06 11:49:40.75052772 
+0000 UTC m=+134.636553058" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.912717 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.912818 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:40 crc kubenswrapper[4958]: E1006 11:49:40.912958 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.913029 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:40 crc kubenswrapper[4958]: E1006 11:49:40.913119 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:40 crc kubenswrapper[4958]: E1006 11:49:40.913264 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.978347 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4mxw5"] Oct 06 11:49:40 crc kubenswrapper[4958]: I1006 11:49:40.978470 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:40 crc kubenswrapper[4958]: E1006 11:49:40.978564 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:42 crc kubenswrapper[4958]: E1006 11:49:42.041595 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 06 11:49:42 crc kubenswrapper[4958]: I1006 11:49:42.912560 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:42 crc kubenswrapper[4958]: I1006 11:49:42.912574 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:42 crc kubenswrapper[4958]: I1006 11:49:42.912616 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:42 crc kubenswrapper[4958]: I1006 11:49:42.912615 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:42 crc kubenswrapper[4958]: E1006 11:49:42.912965 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:42 crc kubenswrapper[4958]: E1006 11:49:42.913200 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:42 crc kubenswrapper[4958]: E1006 11:49:42.913309 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:42 crc kubenswrapper[4958]: E1006 11:49:42.913512 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:44 crc kubenswrapper[4958]: I1006 11:49:44.913407 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:44 crc kubenswrapper[4958]: I1006 11:49:44.913516 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:44 crc kubenswrapper[4958]: E1006 11:49:44.913605 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:44 crc kubenswrapper[4958]: E1006 11:49:44.913698 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:44 crc kubenswrapper[4958]: I1006 11:49:44.913732 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:44 crc kubenswrapper[4958]: E1006 11:49:44.913862 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:44 crc kubenswrapper[4958]: I1006 11:49:44.914515 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:44 crc kubenswrapper[4958]: E1006 11:49:44.914796 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:46 crc kubenswrapper[4958]: I1006 11:49:46.912990 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:46 crc kubenswrapper[4958]: I1006 11:49:46.913124 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:46 crc kubenswrapper[4958]: E1006 11:49:46.914856 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:49:46 crc kubenswrapper[4958]: I1006 11:49:46.914929 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:46 crc kubenswrapper[4958]: I1006 11:49:46.914980 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:46 crc kubenswrapper[4958]: E1006 11:49:46.915079 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:49:46 crc kubenswrapper[4958]: E1006 11:49:46.915266 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:49:46 crc kubenswrapper[4958]: E1006 11:49:46.915487 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4mxw5" podUID="cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.912532 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.912627 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.912530 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.913313 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.915772 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.916531 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.917355 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.920020 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.920086 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 06 11:49:48 crc kubenswrapper[4958]: I1006 11:49:48.922004 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 06 11:49:53 crc kubenswrapper[4958]: I1006 11:49:53.802455 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:49:53 crc kubenswrapper[4958]: I1006 11:49:53.802553 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.717238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:49:54 crc kubenswrapper[4958]: E1006 11:49:54.717479 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:51:56.717438841 +0000 UTC m=+270.603464189 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.819384 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.819468 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.819546 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.819604 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.820950 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.828222 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.829330 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.829406 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 
11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.945920 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.960588 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:49:54 crc kubenswrapper[4958]: I1006 11:49:54.991978 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:49:55 crc kubenswrapper[4958]: W1006 11:49:55.248082 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c94903c59ef0414d2b1424e286cc00a25811c275d4ceee409b8deeb4fb130bfc WatchSource:0}: Error finding container c94903c59ef0414d2b1424e286cc00a25811c275d4ceee409b8deeb4fb130bfc: Status 404 returned error can't find the container with id c94903c59ef0414d2b1424e286cc00a25811c275d4ceee409b8deeb4fb130bfc Oct 06 11:49:55 crc kubenswrapper[4958]: W1006 11:49:55.255422 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-4fe4a2b974258c52ea0c844fe85bfc7b961ea1fe579836fda9058214307461bf WatchSource:0}: Error finding container 4fe4a2b974258c52ea0c844fe85bfc7b961ea1fe579836fda9058214307461bf: Status 404 returned error can't find the container with id 4fe4a2b974258c52ea0c844fe85bfc7b961ea1fe579836fda9058214307461bf Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.774992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3416414168c55063570b5f7f43f46f1a4c7f8b6c5e058fda0305274f4af8f777"} Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.775959 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c94903c59ef0414d2b1424e286cc00a25811c275d4ceee409b8deeb4fb130bfc"} Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.779374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0eccde95c1b5c3ea3ccd4571e5fe69c0e6bc8c6fa7029698d0f1d06a36169738"} Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.779741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"31a9866e8d94c251bd0e9cb758946fa43dad1fba39300869aad0d3c7d8a75564"} Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.783316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"54335ac99e0cd30971f5232fdae23ec8d7bc85cf6229415335c5efadc11c86ae"} Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.783389 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4fe4a2b974258c52ea0c844fe85bfc7b961ea1fe579836fda9058214307461bf"} Oct 06 11:49:55 crc kubenswrapper[4958]: I1006 11:49:55.783625 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.200297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.248431 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-57fbj"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.249631 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.251684 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.252061 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.252714 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.252736 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.252718 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.258621 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.258627 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.258736 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.258986 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.259005 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.266274 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.266844 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.266872 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.271269 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-thrt8"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.271684 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.286881 4958 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.298066 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.298830 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.299838 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.300098 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.301564 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-x8mdr"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.302254 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.304524 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.312260 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.312654 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z2w99"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.312981 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313005 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313009 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313558 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313625 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313735 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313560 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m9l68"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313861 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313979 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.313992 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.314100 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 06 11:49:57 crc 
kubenswrapper[4958]: I1006 11:49:57.314128 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.314268 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.315074 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.315270 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.315748 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.315977 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.316134 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.317953 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.318720 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dftvm"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.319049 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.319238 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.319728 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.319759 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.324326 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.327825 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.328203 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.328322 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.329050 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bs4m6"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.329592 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.330214 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.331286 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.331333 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.331828 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.342074 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343387 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343487 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343550 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343597 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343680 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343687 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343770 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343809 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343821 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343897 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343880 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343922 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343991 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344012 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344028 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 06 11:49:57 
crc kubenswrapper[4958]: I1006 11:49:57.344070 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344082 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344138 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344162 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344169 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.343771 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344261 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344273 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344285 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344293 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344344 4958 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344368 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344392 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344414 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344421 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344469 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344499 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344500 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344541 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344645 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.344996 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.359060 4958 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.369789 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.409710 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.412628 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.422920 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.424040 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.424922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-config\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.424969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f84a255-5034-424e-acf0-5ba9f4aa0531-audit-dir\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425004 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-client-ca\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425028 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-image-import-ca\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425072 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-etcd-serving-ca\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkcl\" (UniqueName: \"kubernetes.io/projected/3f84a255-5034-424e-acf0-5ba9f4aa0531-kube-api-access-8pkcl\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af269ee-1565-4c36-a416-c4e2e7397fc5-serving-cert\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425136 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-encryption-config\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425193 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-trusted-ca-bundle\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425215 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-audit\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425237 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-config\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425258 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/3f84a255-5034-424e-acf0-5ba9f4aa0531-node-pullsecrets\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-etcd-client\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-serving-cert\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.425337 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cskx9\" (UniqueName: \"kubernetes.io/projected/5af269ee-1565-4c36-a416-c4e2e7397fc5-kube-api-access-cskx9\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.428249 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.428641 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.429230 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-f9d7485db-ccpgf"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.429687 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.430077 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kt8qb"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.430388 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xnxs2"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.430697 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.430790 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2nxc"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.431199 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.431453 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.431730 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.432005 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.433317 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.433355 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.433474 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.433664 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.433780 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.433940 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.434021 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.435177 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.435655 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.446130 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.447101 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.447323 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.447479 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.449455 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.450910 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.451030 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.451104 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.451275 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.452569 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453198 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453271 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453485 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453617 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453707 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453769 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453809 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.453922 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.454039 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.454182 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.454305 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.454421 4958 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.454579 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.460473 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mfm8s"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.461087 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mfm8s"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.461830 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.462274 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.463062 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.463759 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.464326 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swm82"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.465180 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.465408 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.465826 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.466509 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.467349 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.467776 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-thrt8"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.470136 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.470610 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.470984 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.471237 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.471296 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.472173 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.472564 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.472736 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fhfvm"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.473904 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.475294 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.476094 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.485531 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.486591 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.489394 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.492468 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.492880 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-57fbj"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.493898 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-n8gvd"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.494629 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n8gvd"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.494824 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-928ch"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.495972 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-928ch"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.497384 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.497834 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.497920 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9blls"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.498447 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9blls"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.499433 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7flhj"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.499910 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7flhj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.501956 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.502389 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.502980 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d4mss"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.503432 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.503757 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.503949 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m9l68"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.504638 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.504815 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.505850 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2nxc"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.507092 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.507560 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bs4m6"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.508514 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.509503 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xnxs2"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.510416 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-x8mdr"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.510899 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.511350 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.512311 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.513373 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z2w99"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.514138 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.515045 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.516395 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-928ch"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.516832 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.518168 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.519511 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.520798 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.521741 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.523874 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dftvm"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.526123 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kt8qb"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.526424 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-image-import-ca\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.526555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527519 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-config\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527620 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527211 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-image-import-ca\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527699 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-console-config\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-dir\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527852 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9tp\" (UniqueName: \"kubernetes.io/projected/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-kube-api-access-mh9tp\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527887 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-service-ca-bundle\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527916 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527943 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-serving-cert\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.527978 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-etcd-serving-ca\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528006 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-oauth-config\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528032 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pkcl\" (UniqueName: \"kubernetes.io/projected/3f84a255-5034-424e-acf0-5ba9f4aa0531-kube-api-access-8pkcl\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528055 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528079 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-serving-cert\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528100 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-serving-cert\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-client\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af269ee-1565-4c36-a416-c4e2e7397fc5-serving-cert\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-encryption-config\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528238 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-config\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528256 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-config\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-trusted-ca-bundle\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528305 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-etcd-client\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-audit\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-config\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528499 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f84a255-5034-424e-acf0-5ba9f4aa0531-node-pullsecrets\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528525 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-etcd-client\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-service-ca\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528649 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-policies\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528854 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-etcd-serving-ca\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.528923 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f84a255-5034-424e-acf0-5ba9f4aa0531-node-pullsecrets\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-audit\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529402 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjr4j\" (UniqueName: \"kubernetes.io/projected/2eb410e0-7f66-4f68-8448-35569e09f1c5-kube-api-access-qjr4j\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529452 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529495 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e72b392d-2c9c-462e-bbe5-8f839912c083-audit-dir\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529544 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-oauth-serving-cert\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529562 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-trusted-ca-bundle\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529574 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9pq\" (UniqueName: \"kubernetes.io/projected/975b4fb4-827e-4d99-b37a-5bf622b2c889-kube-api-access-8v9pq\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb410e0-7f66-4f68-8448-35569e09f1c5-serving-cert\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529599 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp"]
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92twf\" (UniqueName: \"kubernetes.io/projected/e72b392d-2c9c-462e-bbe5-8f839912c083-kube-api-access-92twf\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529640 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ztw\" (UniqueName: \"kubernetes.io/projected/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-kube-api-access-x5ztw\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529662 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-serving-cert\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-encryption-config\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529751 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-trusted-ca-bundle\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cskx9\" (UniqueName: \"kubernetes.io/projected/5af269ee-1565-4c36-a416-c4e2e7397fc5-kube-api-access-cskx9\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529799 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529819 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdq7w\" (UniqueName: \"kubernetes.io/projected/87166f97-d9f0-4391-87b6-0ea7ce0208e1-kube-api-access-fdq7w\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529851 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-ca\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529873 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-config\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529893 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975b4fb4-827e-4d99-b37a-5bf622b2c889-serving-cert\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529912 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529932 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529950 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-service-ca\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529973 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.529999 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f84a255-5034-424e-acf0-5ba9f4aa0531-audit-dir\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530111 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-config\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530217 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f84a255-5034-424e-acf0-5ba9f4aa0531-audit-dir\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530230 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/975b4fb4-827e-4d99-b37a-5bf622b2c889-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd"
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530254 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/799bd962-f454-498a-88e6-58793b08d732-kube-api-access-4sdjd\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-trusted-ca\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-client-ca\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530324 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-audit-policies\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 
11:49:57.530340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-serving-cert\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530593 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f84a255-5034-424e-acf0-5ba9f4aa0531-config\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.530920 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-client-ca\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.531604 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.532197 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.533257 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-etcd-client\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc 
kubenswrapper[4958]: I1006 11:49:57.533276 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af269ee-1565-4c36-a416-c4e2e7397fc5-serving-cert\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.541859 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-encryption-config\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.541994 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f84a255-5034-424e-acf0-5ba9f4aa0531-serving-cert\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.543080 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kg57j"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.543926 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.545992 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swm82"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.547707 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m9cln"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.548516 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.549103 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.550410 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7flhj"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.551323 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.551873 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.553267 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ccpgf"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.555651 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.556654 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.557650 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m9cln"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.565319 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9blls"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.567656 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fhfvm"] Oct 06 11:49:57 crc kubenswrapper[4958]: 
I1006 11:49:57.569171 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.571316 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.571407 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kg57j"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.571799 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d4mss"] Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.611311 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631491 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-serving-cert\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631537 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-client\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631580 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: 
\"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-config\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-config\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631657 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-etcd-client\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-service-ca\") 
pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631705 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-policies\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631731 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631747 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e72b392d-2c9c-462e-bbe5-8f839912c083-audit-dir\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631763 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-oauth-serving-cert\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631780 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjr4j\" (UniqueName: \"kubernetes.io/projected/2eb410e0-7f66-4f68-8448-35569e09f1c5-kube-api-access-qjr4j\") 
pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631797 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631812 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb410e0-7f66-4f68-8448-35569e09f1c5-serving-cert\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631827 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92twf\" (UniqueName: \"kubernetes.io/projected/e72b392d-2c9c-462e-bbe5-8f839912c083-kube-api-access-92twf\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v9pq\" (UniqueName: \"kubernetes.io/projected/975b4fb4-827e-4d99-b37a-5bf622b2c889-kube-api-access-8v9pq\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631856 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ztw\" (UniqueName: \"kubernetes.io/projected/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-kube-api-access-x5ztw\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631891 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-encryption-config\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631907 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-trusted-ca-bundle\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631928 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631948 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdq7w\" (UniqueName: \"kubernetes.io/projected/87166f97-d9f0-4391-87b6-0ea7ce0208e1-kube-api-access-fdq7w\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-ca\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631997 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632013 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-service-ca\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632028 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975b4fb4-827e-4d99-b37a-5bf622b2c889-serving-cert\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632044 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632062 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/975b4fb4-827e-4d99-b37a-5bf622b2c889-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632077 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" 
Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/799bd962-f454-498a-88e6-58793b08d732-kube-api-access-4sdjd\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632105 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-trusted-ca\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-audit-policies\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-serving-cert\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632168 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632182 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-config\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632199 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-console-config\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632231 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-dir\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9tp\" (UniqueName: \"kubernetes.io/projected/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-kube-api-access-mh9tp\") pod 
\"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-service-ca-bundle\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632294 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-serving-cert\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-oauth-config\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632339 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-serving-cert\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.632667 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-config\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.634047 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.634615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.634633 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-config\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.634760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-console-config\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.634821 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-dir\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.635066 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-service-ca\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.635253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-config\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.635628 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.635696 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.631730 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.636759 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-client\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.636900 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-serving-cert\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.636928 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb410e0-7f66-4f68-8448-35569e09f1c5-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.637443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-policies\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.637465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-oauth-config\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.637895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.638018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-service-ca\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.638186 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e72b392d-2c9c-462e-bbe5-8f839912c083-audit-dir\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.638129 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-oauth-serving-cert\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.638126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-etcd-client\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.638746 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.639107 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-etcd-ca\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.639157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.639640 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-trusted-ca-bundle\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.639752 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-serving-cert\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.640028 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e72b392d-2c9c-462e-bbe5-8f839912c083-audit-policies\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.640408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-trusted-ca\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.640936 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-encryption-config\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.641356 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.641546 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb410e0-7f66-4f68-8448-35569e09f1c5-serving-cert\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.642030 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975b4fb4-827e-4d99-b37a-5bf622b2c889-serving-cert\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.642434 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.642689 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e72b392d-2c9c-462e-bbe5-8f839912c083-serving-cert\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.642713 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/975b4fb4-827e-4d99-b37a-5bf622b2c889-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.643065 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.643161 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.643529 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: 
\"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.644137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.645204 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.645811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-serving-cert\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.651736 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.671386 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.711869 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 
06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.731409 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.751271 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.776339 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.792085 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.812221 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.831906 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.852141 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.872859 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.891916 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.911481 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.945628 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 
11:49:57.950907 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.971775 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 06 11:49:57 crc kubenswrapper[4958]: I1006 11:49:57.991326 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.012970 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.031767 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.052309 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.073093 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.092100 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.113018 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.132385 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.152503 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.172540 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.192354 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.212063 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.232599 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.252974 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.271673 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.291881 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.312775 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.331958 4958 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.353039 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.372121 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.392817 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.418950 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.432466 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.452353 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.472353 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.490220 4958 request.go:700] Waited for 1.01605736s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.492316 4958 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.511934 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.532983 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.551906 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.573023 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.592678 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.611990 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.632039 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.652276 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.673220 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.694006 4958 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.712699 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.732217 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.753055 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.773878 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.796745 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.811653 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.831815 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.852574 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.872027 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.892417 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 06 11:49:58 crc 
kubenswrapper[4958]: I1006 11:49:58.913508 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.932246 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.953217 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.972571 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 06 11:49:58 crc kubenswrapper[4958]: I1006 11:49:58.992718 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.012715 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.032186 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.051634 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.074190 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.092591 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.113267 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.134809 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.152697 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.173360 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.210085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pkcl\" (UniqueName: \"kubernetes.io/projected/3f84a255-5034-424e-acf0-5ba9f4aa0531-kube-api-access-8pkcl\") pod \"apiserver-76f77b778f-57fbj\" (UID: \"3f84a255-5034-424e-acf0-5ba9f4aa0531\") " pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.230138 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cskx9\" (UniqueName: \"kubernetes.io/projected/5af269ee-1565-4c36-a416-c4e2e7397fc5-kube-api-access-cskx9\") pod \"route-controller-manager-6576b87f9c-fmlg2\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.232442 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.252877 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.271580 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" 
Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.292353 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.312996 4958 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.332330 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.398813 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9tp\" (UniqueName: \"kubernetes.io/projected/11d97a87-1e86-4e2c-aad1-c66fd9dafea3-kube-api-access-mh9tp\") pod \"console-operator-58897d9998-z2w99\" (UID: \"11d97a87-1e86-4e2c-aad1-c66fd9dafea3\") " pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.408487 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.421318 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v9pq\" (UniqueName: \"kubernetes.io/projected/975b4fb4-827e-4d99-b37a-5bf622b2c889-kube-api-access-8v9pq\") pod \"openshift-config-operator-7777fb866f-vqqpd\" (UID: \"975b4fb4-827e-4d99-b37a-5bf622b2c889\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.438194 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.441758 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ztw\" (UniqueName: \"kubernetes.io/projected/6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c-kube-api-access-x5ztw\") pod \"etcd-operator-b45778765-xnxs2\" (UID: \"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459241 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55zm\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-kube-api-access-s55zm\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-images\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29169783-2273-40c2-abe9-85f3ee6c4f7f-metrics-tls\") pod \"dns-operator-744455d44c-t2nxc\" (UID: \"29169783-2273-40c2-abe9-85f3ee6c4f7f\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459377 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/6a7580eb-dece-41a6-8335-33c29bc41056-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-trusted-ca\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459476 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf992a4-d7af-455c-953b-c865445feb6c-serving-cert\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5715f094-2c1f-4d0c-8901-f4378d613048-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ec580a-268f-4a8a-a6e1-9b64e463ca20-config\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459954 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/5715f094-2c1f-4d0c-8901-f4378d613048-kube-api-access-b5zxx\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.459998 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-registry-certificates\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.460118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97ec580a-268f-4a8a-a6e1-9b64e463ca20-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.460216 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5715f094-2c1f-4d0c-8901-f4378d613048-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.460292 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5dq\" (UniqueName: \"kubernetes.io/projected/97ec580a-268f-4a8a-a6e1-9b64e463ca20-kube-api-access-kn5dq\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.460331 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.460781 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a7580eb-dece-41a6-8335-33c29bc41056-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.460891 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7dd9d221-704f-4205-8a53-a7d42e162723-machine-approver-tls\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.461419 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-config\") pod 
\"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.462426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.462565 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-config\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: E1006 11:49:59.464067 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:49:59.964041457 +0000 UTC m=+153.850066805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-client-ca\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbql\" (UniqueName: \"kubernetes.io/projected/b261e077-f9df-49ba-87df-a5d1988755bf-kube-api-access-prbql\") pod \"cluster-samples-operator-665b6dd947-gj6pw\" (UID: \"b261e077-f9df-49ba-87df-a5d1988755bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-kube-api-access-c7cq6\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465396 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5cj\" 
(UniqueName: \"kubernetes.io/projected/29169783-2273-40c2-abe9-85f3ee6c4f7f-kube-api-access-nn5cj\") pod \"dns-operator-744455d44c-t2nxc\" (UID: \"29169783-2273-40c2-abe9-85f3ee6c4f7f\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465503 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7dd9d221-704f-4205-8a53-a7d42e162723-auth-proxy-config\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkg2g\" (UniqueName: \"kubernetes.io/projected/7dd9d221-704f-4205-8a53-a7d42e162723-kube-api-access-bkg2g\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465682 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-registry-tls\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465779 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5z44\" (UniqueName: \"kubernetes.io/projected/0538c536-f662-48c7-98fe-a1ceff33ade3-kube-api-access-g5z44\") pod \"downloads-7954f5f757-bs4m6\" (UID: \"0538c536-f662-48c7-98fe-a1ceff33ade3\") " 
pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465895 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd9d221-704f-4205-8a53-a7d42e162723-config\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.465994 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5715f094-2c1f-4d0c-8901-f4378d613048-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.466221 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b261e077-f9df-49ba-87df-a5d1988755bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gj6pw\" (UID: \"b261e077-f9df-49ba-87df-a5d1988755bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.466365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-bound-sa-token\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.466420 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.466531 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5lk7\" (UniqueName: \"kubernetes.io/projected/cbf992a4-d7af-455c-953b-c865445feb6c-kube-api-access-s5lk7\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.469829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92twf\" (UniqueName: \"kubernetes.io/projected/e72b392d-2c9c-462e-bbe5-8f839912c083-kube-api-access-92twf\") pod \"apiserver-7bbb656c7d-vqq9g\" (UID: \"e72b392d-2c9c-462e-bbe5-8f839912c083\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.481195 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdq7w\" (UniqueName: \"kubernetes.io/projected/87166f97-d9f0-4391-87b6-0ea7ce0208e1-kube-api-access-fdq7w\") pod \"oauth-openshift-558db77b4-dftvm\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.490711 4958 request.go:700] Waited for 1.850801547s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 
11:49:59.500071 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/799bd962-f454-498a-88e6-58793b08d732-kube-api-access-4sdjd\") pod \"console-f9d7485db-ccpgf\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.535593 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjr4j\" (UniqueName: \"kubernetes.io/projected/2eb410e0-7f66-4f68-8448-35569e09f1c5-kube-api-access-qjr4j\") pod \"authentication-operator-69f744f599-m9l68\" (UID: \"2eb410e0-7f66-4f68-8448-35569e09f1c5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.541170 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.565885 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.567600 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.567880 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/221a0896-9d41-4bf4-b05e-57c067e8b885-secret-volume\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.567934 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd850de-cbf2-4d68-9226-223ba0e3cb72-trusted-ca\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.568026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8275a125-7a77-49fe-bebf-d140013d5a3f-cert\") pod \"ingress-canary-7flhj\" (UID: \"8275a125-7a77-49fe-bebf-d140013d5a3f\") " pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.568081 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd9d221-704f-4205-8a53-a7d42e162723-config\") pod 
\"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.568130 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5z44\" (UniqueName: \"kubernetes.io/projected/0538c536-f662-48c7-98fe-a1ceff33ade3-kube-api-access-g5z44\") pod \"downloads-7954f5f757-bs4m6\" (UID: \"0538c536-f662-48c7-98fe-a1ceff33ade3\") " pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.568214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-webhook-cert\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.568263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcm7\" (UniqueName: \"kubernetes.io/projected/21e2c2d4-0ea2-4767-b88d-0229390bff9a-kube-api-access-qlcm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.568340 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5715f094-2c1f-4d0c-8901-f4378d613048-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc 
kubenswrapper[4958]: E1006 11:49:59.568564 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.068541612 +0000 UTC m=+153.954566930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.569258 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd9d221-704f-4205-8a53-a7d42e162723-config\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570177 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk8mt\" (UniqueName: \"kubernetes.io/projected/37c87c2e-92b8-4b83-be26-cd63ec636eda-kube-api-access-tk8mt\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570247 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: 
\"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570289 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9bm\" (UniqueName: \"kubernetes.io/projected/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-kube-api-access-6h9bm\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0a306b6-7739-4ec5-9dac-68001b21447c-node-bootstrap-token\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570377 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b853b964-2cab-4b75-93fe-df7bd4e0f033-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570420 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d829t\" (UniqueName: \"kubernetes.io/projected/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-kube-api-access-d829t\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc 
kubenswrapper[4958]: I1006 11:49:59.570518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570565 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b261e077-f9df-49ba-87df-a5d1988755bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gj6pw\" (UID: \"b261e077-f9df-49ba-87df-a5d1988755bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570610 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570653 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5lk7\" (UniqueName: \"kubernetes.io/projected/cbf992a4-d7af-455c-953b-c865445feb6c-kube-api-access-s5lk7\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55zm\" (UniqueName: 
\"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-kube-api-access-s55zm\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-default-certificate\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zs7\" (UniqueName: \"kubernetes.io/projected/6559fbed-d2e2-4578-992e-b4088643cd24-kube-api-access-p4zs7\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1163149-6bad-4274-87de-23e0b45a284e-srv-cert\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570861 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-images\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc 
kubenswrapper[4958]: I1006 11:49:59.570911 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hxg\" (UniqueName: \"kubernetes.io/projected/221a0896-9d41-4bf4-b05e-57c067e8b885-kube-api-access-97hxg\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.570990 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf992a4-d7af-455c-953b-c865445feb6c-serving-cert\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571035 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5715f094-2c1f-4d0c-8901-f4378d613048-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571131 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ec580a-268f-4a8a-a6e1-9b64e463ca20-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571270 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-registry-certificates\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571311 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97ec580a-268f-4a8a-a6e1-9b64e463ca20-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571360 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3331225c-8593-4a8c-a256-1aee839e9bb3-signing-cabundle\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571406 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c87c2e-92b8-4b83-be26-cd63ec636eda-service-ca-bundle\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571540 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6559fbed-d2e2-4578-992e-b4088643cd24-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a7580eb-dece-41a6-8335-33c29bc41056-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571673 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-config\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571712 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59305dc8-3a12-401b-83d9-532e520b72b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fhfvm\" (UID: 
\"59305dc8-3a12-401b-83d9-532e520b72b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571757 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7dd9d221-704f-4205-8a53-a7d42e162723-machine-approver-tls\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571854 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1538fe67-0070-42fe-86e4-3ad017710b44-metrics-tls\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.571946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-config\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc407428-3c19-40aa-b476-c159f9b8f2a4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxkkt\" (UID: \"dc407428-3c19-40aa-b476-c159f9b8f2a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572136 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cc565c2-461e-4902-917d-51dc8aba5cf7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572303 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5s8m\" (UniqueName: \"kubernetes.io/projected/b853b964-2cab-4b75-93fe-df7bd4e0f033-kube-api-access-n5s8m\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/abd850de-cbf2-4d68-9226-223ba0e3cb72-metrics-tls\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572424 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2wt6\" (UniqueName: \"kubernetes.io/projected/59305dc8-3a12-401b-83d9-532e520b72b0-kube-api-access-v2wt6\") pod \"multus-admission-controller-857f4d67dd-fhfvm\" (UID: \"59305dc8-3a12-401b-83d9-532e520b72b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572474 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5924455c-9450-47f0-a762-355eae4c73c4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbql\" (UniqueName: \"kubernetes.io/projected/b261e077-f9df-49ba-87df-a5d1988755bf-kube-api-access-prbql\") pod \"cluster-samples-operator-665b6dd947-gj6pw\" (UID: \"b261e077-f9df-49ba-87df-a5d1988755bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc565c2-461e-4902-917d-51dc8aba5cf7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.572691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-config\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573138 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-registration-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5cj\" (UniqueName: \"kubernetes.io/projected/29169783-2273-40c2-abe9-85f3ee6c4f7f-kube-api-access-nn5cj\") pod \"dns-operator-744455d44c-t2nxc\" (UID: \"29169783-2273-40c2-abe9-85f3ee6c4f7f\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573294 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcwhl\" (UniqueName: \"kubernetes.io/projected/1658bc0b-def5-4f14-a28d-f20c83f435da-kube-api-access-zcwhl\") pod \"package-server-manager-789f6589d5-k94lv\" (UID: \"1658bc0b-def5-4f14-a28d-f20c83f435da\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573347 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-serving-cert\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573401 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-metrics-certs\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573474 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-registry-tls\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.573532 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b853b964-2cab-4b75-93fe-df7bd4e0f033-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590265 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kmgb\" (UniqueName: \"kubernetes.io/projected/d982ea17-3155-4d01-bf28-5045db9fe780-kube-api-access-9kmgb\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6559fbed-d2e2-4578-992e-b4088643cd24-images\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590605 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhk5\" (UniqueName: \"kubernetes.io/projected/ef5f0384-c041-47c5-80cf-c545d95c5876-kube-api-access-rnhk5\") pod \"migrator-59844c95c7-7ndvg\" (UID: \"ef5f0384-c041-47c5-80cf-c545d95c5876\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590680 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc565c2-461e-4902-917d-51dc8aba5cf7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590762 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-bound-sa-token\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590821 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd850de-cbf2-4d68-9226-223ba0e3cb72-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-apiservice-cert\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590925 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4nj\" (UniqueName: \"kubernetes.io/projected/a0a306b6-7739-4ec5-9dac-68001b21447c-kube-api-access-6q4nj\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.590988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjlh\" (UniqueName: \"kubernetes.io/projected/abd850de-cbf2-4d68-9226-223ba0e3cb72-kube-api-access-htjlh\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.591023 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-stats-auth\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " 
pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.591875 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5715f094-2c1f-4d0c-8901-f4378d613048-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.598690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29169783-2273-40c2-abe9-85f3ee6c4f7f-metrics-tls\") pod \"dns-operator-744455d44c-t2nxc\" (UID: \"29169783-2273-40c2-abe9-85f3ee6c4f7f\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.598801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29169783-2273-40c2-abe9-85f3ee6c4f7f-metrics-tls\") pod \"dns-operator-744455d44c-t2nxc\" (UID: \"29169783-2273-40c2-abe9-85f3ee6c4f7f\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.600689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b261e077-f9df-49ba-87df-a5d1988755bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gj6pw\" (UID: \"b261e077-f9df-49ba-87df-a5d1988755bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.600861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-registry-tls\") pod 
\"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.603133 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.603125 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-registry-certificates\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.604350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a7580eb-dece-41a6-8335-33c29bc41056-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.605054 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ec580a-268f-4a8a-a6e1-9b64e463ca20-config\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.605463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.605842 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-config\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.606378 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-images\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.606772 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmslt\" (UniqueName: \"kubernetes.io/projected/dc407428-3c19-40aa-b476-c159f9b8f2a4-kube-api-access-hmslt\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxkkt\" (UID: \"dc407428-3c19-40aa-b476-c159f9b8f2a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.606838 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a7580eb-dece-41a6-8335-33c29bc41056-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 
11:49:59.608455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf992a4-d7af-455c-953b-c865445feb6c-serving-cert\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.610765 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.611020 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a7580eb-dece-41a6-8335-33c29bc41056-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.611407 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-trusted-ca\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.612110 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.613321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/5715f094-2c1f-4d0c-8901-f4378d613048-kube-api-access-b5zxx\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.613371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-plugins-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.613825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-csi-data-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.613871 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5715f094-2c1f-4d0c-8901-f4378d613048-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.613902 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1658bc0b-def5-4f14-a28d-f20c83f435da-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k94lv\" (UID: \"1658bc0b-def5-4f14-a28d-f20c83f435da\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.614305 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1538fe67-0070-42fe-86e4-3ad017710b44-config-volume\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.614345 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn5dq\" (UniqueName: \"kubernetes.io/projected/97ec580a-268f-4a8a-a6e1-9b64e463ca20-kube-api-access-kn5dq\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.614378 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e722343-2649-43dd-a7dd-842e41136658-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.614564 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpsf\" (UniqueName: \"kubernetes.io/projected/5924455c-9450-47f0-a762-355eae4c73c4-kube-api-access-2kpsf\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") 
" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.615128 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d982ea17-3155-4d01-bf28-5045db9fe780-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.615181 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c2d4-0ea2-4767-b88d-0229390bff9a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.615208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0a306b6-7739-4ec5-9dac-68001b21447c-certs\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: E1006 11:49:59.616073 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.116042275 +0000 UTC m=+154.002067593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.618600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5715f094-2c1f-4d0c-8901-f4378d613048-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.619462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97ec580a-268f-4a8a-a6e1-9b64e463ca20-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.621849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-config\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.622278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7dd9d221-704f-4205-8a53-a7d42e162723-machine-approver-tls\") pod 
\"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.634081 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-mountpoint-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.634189 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/221a0896-9d41-4bf4-b05e-57c067e8b885-config-volume\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.634586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75cv\" (UniqueName: \"kubernetes.io/projected/3331225c-8593-4a8c-a256-1aee839e9bb3-kube-api-access-z75cv\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.634976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5924455c-9450-47f0-a762-355eae4c73c4-proxy-tls\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635017 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-tmpfs\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635053 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1163149-6bad-4274-87de-23e0b45a284e-profile-collector-cert\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpmd\" (UniqueName: \"kubernetes.io/projected/1538fe67-0070-42fe-86e4-3ad017710b44-kube-api-access-2qpmd\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635175 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6559fbed-d2e2-4578-992e-b4088643cd24-proxy-tls\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635220 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-client-ca\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635246 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e722343-2649-43dd-a7dd-842e41136658-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635275 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhd4\" (UniqueName: \"kubernetes.io/projected/8275a125-7a77-49fe-bebf-d140013d5a3f-kube-api-access-5jhd4\") pod \"ingress-canary-7flhj\" (UID: \"8275a125-7a77-49fe-bebf-d140013d5a3f\") " pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635297 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e2c2d4-0ea2-4767-b88d-0229390bff9a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635318 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-socket-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635342 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zxswz\" (UniqueName: \"kubernetes.io/projected/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-kube-api-access-zxswz\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-kube-api-access-c7cq6\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gv4m\" (UniqueName: \"kubernetes.io/projected/f1163149-6bad-4274-87de-23e0b45a284e-kube-api-access-9gv4m\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e722343-2649-43dd-a7dd-842e41136658-config\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635440 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-config\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635463 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d982ea17-3155-4d01-bf28-5045db9fe780-srv-cert\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7dd9d221-704f-4205-8a53-a7d42e162723-auth-proxy-config\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635512 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkg2g\" (UniqueName: \"kubernetes.io/projected/7dd9d221-704f-4205-8a53-a7d42e162723-kube-api-access-bkg2g\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635537 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjr8z\" (UniqueName: \"kubernetes.io/projected/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-kube-api-access-xjr8z\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.635628 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/3331225c-8593-4a8c-a256-1aee839e9bb3-signing-key\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.636406 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7dd9d221-704f-4205-8a53-a7d42e162723-auth-proxy-config\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.636406 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-trusted-ca\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.637693 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-client-ca\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.640407 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5z44\" (UniqueName: \"kubernetes.io/projected/0538c536-f662-48c7-98fe-a1ceff33ade3-kube-api-access-g5z44\") pod \"downloads-7954f5f757-bs4m6\" (UID: \"0538c536-f662-48c7-98fe-a1ceff33ade3\") " pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.640791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5715f094-2c1f-4d0c-8901-f4378d613048-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.645493 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-bound-sa-token\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.663974 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5lk7\" (UniqueName: \"kubernetes.io/projected/cbf992a4-d7af-455c-953b-c865445feb6c-kube-api-access-s5lk7\") pod \"controller-manager-879f6c89f-thrt8\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.666676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55zm\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-kube-api-access-s55zm\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.667898 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.681610 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.687688 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.689873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbql\" (UniqueName: \"kubernetes.io/projected/b261e077-f9df-49ba-87df-a5d1988755bf-kube-api-access-prbql\") pod \"cluster-samples-operator-665b6dd947-gj6pw\" (UID: \"b261e077-f9df-49ba-87df-a5d1988755bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.727358 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/5715f094-2c1f-4d0c-8901-f4378d613048-kube-api-access-b5zxx\") pod \"cluster-image-registry-operator-dc59b4c8b-f2982\" (UID: \"5715f094-2c1f-4d0c-8901-f4378d613048\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738550 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738760 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhk5\" (UniqueName: \"kubernetes.io/projected/ef5f0384-c041-47c5-80cf-c545d95c5876-kube-api-access-rnhk5\") pod \"migrator-59844c95c7-7ndvg\" (UID: \"ef5f0384-c041-47c5-80cf-c545d95c5876\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" Oct 06 
11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc565c2-461e-4902-917d-51dc8aba5cf7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738813 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd850de-cbf2-4d68-9226-223ba0e3cb72-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738837 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-apiservice-cert\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q4nj\" (UniqueName: \"kubernetes.io/projected/a0a306b6-7739-4ec5-9dac-68001b21447c-kube-api-access-6q4nj\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738881 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjlh\" (UniqueName: \"kubernetes.io/projected/abd850de-cbf2-4d68-9226-223ba0e3cb72-kube-api-access-htjlh\") pod 
\"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738905 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-stats-auth\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738931 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmslt\" (UniqueName: \"kubernetes.io/projected/dc407428-3c19-40aa-b476-c159f9b8f2a4-kube-api-access-hmslt\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxkkt\" (UID: \"dc407428-3c19-40aa-b476-c159f9b8f2a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738956 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-plugins-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.738980 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-csi-data-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739000 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1658bc0b-def5-4f14-a28d-f20c83f435da-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k94lv\" (UID: \"1658bc0b-def5-4f14-a28d-f20c83f435da\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739024 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1538fe67-0070-42fe-86e4-3ad017710b44-config-volume\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e722343-2649-43dd-a7dd-842e41136658-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739077 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpsf\" (UniqueName: \"kubernetes.io/projected/5924455c-9450-47f0-a762-355eae4c73c4-kube-api-access-2kpsf\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d982ea17-3155-4d01-bf28-5045db9fe780-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 
11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739124 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c2d4-0ea2-4767-b88d-0229390bff9a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739166 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0a306b6-7739-4ec5-9dac-68001b21447c-certs\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-mountpoint-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/221a0896-9d41-4bf4-b05e-57c067e8b885-config-volume\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739243 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75cv\" (UniqueName: \"kubernetes.io/projected/3331225c-8593-4a8c-a256-1aee839e9bb3-kube-api-access-z75cv\") pod \"service-ca-9c57cc56f-9blls\" (UID: 
\"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5924455c-9450-47f0-a762-355eae4c73c4-proxy-tls\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739288 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-tmpfs\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1163149-6bad-4274-87de-23e0b45a284e-profile-collector-cert\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739334 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpmd\" (UniqueName: \"kubernetes.io/projected/1538fe67-0070-42fe-86e4-3ad017710b44-kube-api-access-2qpmd\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6559fbed-d2e2-4578-992e-b4088643cd24-proxy-tls\") pod 
\"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739382 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e722343-2649-43dd-a7dd-842e41136658-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739407 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhd4\" (UniqueName: \"kubernetes.io/projected/8275a125-7a77-49fe-bebf-d140013d5a3f-kube-api-access-5jhd4\") pod \"ingress-canary-7flhj\" (UID: \"8275a125-7a77-49fe-bebf-d140013d5a3f\") " pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739431 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e2c2d4-0ea2-4767-b88d-0229390bff9a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739463 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-socket-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739488 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zxswz\" (UniqueName: \"kubernetes.io/projected/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-kube-api-access-zxswz\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739524 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gv4m\" (UniqueName: \"kubernetes.io/projected/f1163149-6bad-4274-87de-23e0b45a284e-kube-api-access-9gv4m\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e722343-2649-43dd-a7dd-842e41136658-config\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d982ea17-3155-4d01-bf28-5045db9fe780-srv-cert\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjr8z\" (UniqueName: \"kubernetes.io/projected/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-kube-api-access-xjr8z\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3331225c-8593-4a8c-a256-1aee839e9bb3-signing-key\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/221a0896-9d41-4bf4-b05e-57c067e8b885-secret-volume\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd850de-cbf2-4d68-9226-223ba0e3cb72-trusted-ca\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739695 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8275a125-7a77-49fe-bebf-d140013d5a3f-cert\") pod \"ingress-canary-7flhj\" (UID: \"8275a125-7a77-49fe-bebf-d140013d5a3f\") " pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739694 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-mountpoint-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-webhook-cert\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcm7\" (UniqueName: \"kubernetes.io/projected/21e2c2d4-0ea2-4767-b88d-0229390bff9a-kube-api-access-qlcm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739803 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk8mt\" (UniqueName: \"kubernetes.io/projected/37c87c2e-92b8-4b83-be26-cd63ec636eda-kube-api-access-tk8mt\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739822 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739840 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9bm\" (UniqueName: 
\"kubernetes.io/projected/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-kube-api-access-6h9bm\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0a306b6-7739-4ec5-9dac-68001b21447c-node-bootstrap-token\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: E1006 11:49:59.739883 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.239858193 +0000 UTC m=+154.125883591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b853b964-2cab-4b75-93fe-df7bd4e0f033-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.739956 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d829t\" (UniqueName: \"kubernetes.io/projected/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-kube-api-access-d829t\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740010 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-default-certificate\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740070 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zs7\" (UniqueName: \"kubernetes.io/projected/6559fbed-d2e2-4578-992e-b4088643cd24-kube-api-access-p4zs7\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740089 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1163149-6bad-4274-87de-23e0b45a284e-srv-cert\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hxg\" (UniqueName: \"kubernetes.io/projected/221a0896-9d41-4bf4-b05e-57c067e8b885-kube-api-access-97hxg\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740130 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc 
kubenswrapper[4958]: I1006 11:49:59.740171 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3331225c-8593-4a8c-a256-1aee839e9bb3-signing-cabundle\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740191 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c87c2e-92b8-4b83-be26-cd63ec636eda-service-ca-bundle\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6559fbed-d2e2-4578-992e-b4088643cd24-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740259 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-config\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740276 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59305dc8-3a12-401b-83d9-532e520b72b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fhfvm\" (UID: \"59305dc8-3a12-401b-83d9-532e520b72b0\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740317 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740335 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1538fe67-0070-42fe-86e4-3ad017710b44-metrics-tls\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-csi-data-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc407428-3c19-40aa-b476-c159f9b8f2a4-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-jxkkt\" (UID: \"dc407428-3c19-40aa-b476-c159f9b8f2a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740404 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cc565c2-461e-4902-917d-51dc8aba5cf7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5s8m\" (UniqueName: \"kubernetes.io/projected/b853b964-2cab-4b75-93fe-df7bd4e0f033-kube-api-access-n5s8m\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740446 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abd850de-cbf2-4d68-9226-223ba0e3cb72-metrics-tls\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740462 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2wt6\" (UniqueName: \"kubernetes.io/projected/59305dc8-3a12-401b-83d9-532e520b72b0-kube-api-access-v2wt6\") pod \"multus-admission-controller-857f4d67dd-fhfvm\" (UID: \"59305dc8-3a12-401b-83d9-532e520b72b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 
11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5924455c-9450-47f0-a762-355eae4c73c4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc565c2-461e-4902-917d-51dc8aba5cf7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740520 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-config\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-registration-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740563 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcwhl\" (UniqueName: \"kubernetes.io/projected/1658bc0b-def5-4f14-a28d-f20c83f435da-kube-api-access-zcwhl\") pod 
\"package-server-manager-789f6589d5-k94lv\" (UID: \"1658bc0b-def5-4f14-a28d-f20c83f435da\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/221a0896-9d41-4bf4-b05e-57c067e8b885-config-volume\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740580 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-serving-cert\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-metrics-certs\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b853b964-2cab-4b75-93fe-df7bd4e0f033-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740682 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9kmgb\" (UniqueName: \"kubernetes.io/projected/d982ea17-3155-4d01-bf28-5045db9fe780-kube-api-access-9kmgb\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740700 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6559fbed-d2e2-4578-992e-b4088643cd24-images\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.740786 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-plugins-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.741236 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6559fbed-d2e2-4578-992e-b4088643cd24-images\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.742125 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6559fbed-d2e2-4578-992e-b4088643cd24-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: 
I1006 11:49:59.742375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1538fe67-0070-42fe-86e4-3ad017710b44-config-volume\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.744734 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.745474 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1658bc0b-def5-4f14-a28d-f20c83f435da-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k94lv\" (UID: \"1658bc0b-def5-4f14-a28d-f20c83f435da\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.746351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-tmpfs\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.747334 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-config\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.750122 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1e722343-2649-43dd-a7dd-842e41136658-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.750869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-serving-cert\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.750944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-stats-auth\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.752717 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0a306b6-7739-4ec5-9dac-68001b21447c-node-bootstrap-token\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.753733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c2d4-0ea2-4767-b88d-0229390bff9a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 
11:49:59.758744 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b853b964-2cab-4b75-93fe-df7bd4e0f033-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.759410 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-metrics-certs\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: E1006 11:49:59.759513 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.259497836 +0000 UTC m=+154.145523144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.760116 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.760157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e2c2d4-0ea2-4767-b88d-0229390bff9a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.760421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc565c2-461e-4902-917d-51dc8aba5cf7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.760925 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/37c87c2e-92b8-4b83-be26-cd63ec636eda-service-ca-bundle\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.761074 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3331225c-8593-4a8c-a256-1aee839e9bb3-signing-cabundle\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.761307 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc565c2-461e-4902-917d-51dc8aba5cf7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.761392 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-registration-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.761789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5924455c-9450-47f0-a762-355eae4c73c4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.761936 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-config\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.761967 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abd850de-cbf2-4d68-9226-223ba0e3cb72-metrics-tls\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.762337 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-socket-dir\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.763028 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e722343-2649-43dd-a7dd-842e41136658-config\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.763245 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5924455c-9450-47f0-a762-355eae4c73c4-proxy-tls\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 
11:49:59.763853 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-webhook-cert\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.764696 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6559fbed-d2e2-4578-992e-b4088643cd24-proxy-tls\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.765032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0a306b6-7739-4ec5-9dac-68001b21447c-certs\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.767170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b853b964-2cab-4b75-93fe-df7bd4e0f033-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.767631 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd850de-cbf2-4d68-9226-223ba0e3cb72-trusted-ca\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.767791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc407428-3c19-40aa-b476-c159f9b8f2a4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxkkt\" (UID: \"dc407428-3c19-40aa-b476-c159f9b8f2a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.768333 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/37c87c2e-92b8-4b83-be26-cd63ec636eda-default-certificate\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.768525 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5cj\" (UniqueName: \"kubernetes.io/projected/29169783-2273-40c2-abe9-85f3ee6c4f7f-kube-api-access-nn5cj\") pod \"dns-operator-744455d44c-t2nxc\" (UID: \"29169783-2273-40c2-abe9-85f3ee6c4f7f\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.768548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn5dq\" (UniqueName: \"kubernetes.io/projected/97ec580a-268f-4a8a-a6e1-9b64e463ca20-kube-api-access-kn5dq\") pod \"openshift-apiserver-operator-796bbdcf4f-v7kkf\" (UID: \"97ec580a-268f-4a8a-a6e1-9b64e463ca20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.768791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/d982ea17-3155-4d01-bf28-5045db9fe780-srv-cert\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.769181 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8275a125-7a77-49fe-bebf-d140013d5a3f-cert\") pod \"ingress-canary-7flhj\" (UID: \"8275a125-7a77-49fe-bebf-d140013d5a3f\") " pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.769937 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/221a0896-9d41-4bf4-b05e-57c067e8b885-secret-volume\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.770534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-apiservice-cert\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.770591 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59305dc8-3a12-401b-83d9-532e520b72b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fhfvm\" (UID: \"59305dc8-3a12-401b-83d9-532e520b72b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.770892 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.771213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d982ea17-3155-4d01-bf28-5045db9fe780-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.778756 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3331225c-8593-4a8c-a256-1aee839e9bb3-signing-key\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.779899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.781514 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1163149-6bad-4274-87de-23e0b45a284e-srv-cert\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.782869 4958 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"] Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.783273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1538fe67-0070-42fe-86e4-3ad017710b44-metrics-tls\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.783878 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1163149-6bad-4274-87de-23e0b45a284e-profile-collector-cert\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.799821 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkg2g\" (UniqueName: \"kubernetes.io/projected/7dd9d221-704f-4205-8a53-a7d42e162723-kube-api-access-bkg2g\") pod \"machine-approver-56656f9798-lswc6\" (UID: \"7dd9d221-704f-4205-8a53-a7d42e162723\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.808100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8-kube-api-access-c7cq6\") pod \"machine-api-operator-5694c8668f-x8mdr\" (UID: \"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: W1006 11:49:59.811726 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af269ee_1565_4c36_a416_c4e2e7397fc5.slice/crio-c0c05ff7108ed9a6e6874257ea66c5c2cd7dc794bdb1a691edd9db2ec827c41d WatchSource:0}: Error finding container c0c05ff7108ed9a6e6874257ea66c5c2cd7dc794bdb1a691edd9db2ec827c41d: Status 404 returned error can't find the container with id c0c05ff7108ed9a6e6874257ea66c5c2cd7dc794bdb1a691edd9db2ec827c41d Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.819293 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z2w99"] Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.831672 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.841975 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:49:59 crc kubenswrapper[4958]: E1006 11:49:59.842389 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.342373217 +0000 UTC m=+154.228398525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.851227 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.854809 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpsf\" (UniqueName: \"kubernetes.io/projected/5924455c-9450-47f0-a762-355eae4c73c4-kube-api-access-2kpsf\") pod \"machine-config-controller-84d6567774-wlv4g\" (UID: \"5924455c-9450-47f0-a762-355eae4c73c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.870620 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75cv\" (UniqueName: \"kubernetes.io/projected/3331225c-8593-4a8c-a256-1aee839e9bb3-kube-api-access-z75cv\") pod \"service-ca-9c57cc56f-9blls\" (UID: \"3331225c-8593-4a8c-a256-1aee839e9bb3\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.885893 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.904012 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m9l68"] Oct 06 11:49:59 crc kubenswrapper[4958]: W1006 11:49:59.905959 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd9d221_704f_4205_8a53_a7d42e162723.slice/crio-cfd4f94775f34ca1f391263c0f049a0e63be4f76b6759989fdc4facad83b12b1 WatchSource:0}: Error finding container cfd4f94775f34ca1f391263c0f049a0e63be4f76b6759989fdc4facad83b12b1: Status 404 returned error can't find the container with id cfd4f94775f34ca1f391263c0f049a0e63be4f76b6759989fdc4facad83b12b1 Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.907740 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjlh\" (UniqueName: \"kubernetes.io/projected/abd850de-cbf2-4d68-9226-223ba0e3cb72-kube-api-access-htjlh\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.912706 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.920130 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.921861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlcm7\" (UniqueName: \"kubernetes.io/projected/21e2c2d4-0ea2-4767-b88d-0229390bff9a-kube-api-access-qlcm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-pp96v\" (UID: \"21e2c2d4-0ea2-4767-b88d-0229390bff9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.928854 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.933448 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-57fbj"] Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.941390 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmslt\" (UniqueName: \"kubernetes.io/projected/dc407428-3c19-40aa-b476-c159f9b8f2a4-kube-api-access-hmslt\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxkkt\" (UID: \"dc407428-3c19-40aa-b476-c159f9b8f2a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.943270 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:49:59 crc kubenswrapper[4958]: E1006 11:49:59.943736 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.443723337 +0000 UTC m=+154.329748645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.945426 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhk5\" (UniqueName: \"kubernetes.io/projected/ef5f0384-c041-47c5-80cf-c545d95c5876-kube-api-access-rnhk5\") pod \"migrator-59844c95c7-7ndvg\" (UID: \"ef5f0384-c041-47c5-80cf-c545d95c5876\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.956128 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.958308 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g"] Oct 06 11:49:59 crc kubenswrapper[4958]: W1006 11:49:59.958460 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb410e0_7f66_4f68_8448_35569e09f1c5.slice/crio-b33e7f3e42daa9452c1afa4ebf9c1eda238ea33bb834659e082ead1278195b83 WatchSource:0}: Error finding container b33e7f3e42daa9452c1afa4ebf9c1eda238ea33bb834659e082ead1278195b83: Status 404 returned error can't find the container with id b33e7f3e42daa9452c1afa4ebf9c1eda238ea33bb834659e082ead1278195b83 Oct 06 11:49:59 crc kubenswrapper[4958]: W1006 11:49:59.960442 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f84a255_5034_424e_acf0_5ba9f4aa0531.slice/crio-8a5f5a1c4402b41cd987859f6c33621026b4aef3d5d17318d8302752f6f27444 WatchSource:0}: Error finding container 8a5f5a1c4402b41cd987859f6c33621026b4aef3d5d17318d8302752f6f27444: Status 404 returned error can't find the container with id 8a5f5a1c4402b41cd987859f6c33621026b4aef3d5d17318d8302752f6f27444 Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.969016 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk8mt\" (UniqueName: \"kubernetes.io/projected/37c87c2e-92b8-4b83-be26-cd63ec636eda-kube-api-access-tk8mt\") pod \"router-default-5444994796-mfm8s\" (UID: \"37c87c2e-92b8-4b83-be26-cd63ec636eda\") " pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:49:59 crc kubenswrapper[4958]: I1006 11:49:59.987751 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a23b5e6-14c9-46d0-aed5-4f8b88edfb98-kube-api-access\") 
pod \"kube-apiserver-operator-766d6c64bb-2gbnx\" (UID: \"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:49:59 crc kubenswrapper[4958]: W1006 11:49:59.991331 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode72b392d_2c9c_462e_bbe5_8f839912c083.slice/crio-a8047659dd84da8d06d9eeb75642631e5ba3f3545c5bbb47295625a3bfafdcb0 WatchSource:0}: Error finding container a8047659dd84da8d06d9eeb75642631e5ba3f3545c5bbb47295625a3bfafdcb0: Status 404 returned error can't find the container with id a8047659dd84da8d06d9eeb75642631e5ba3f3545c5bbb47295625a3bfafdcb0 Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.000465 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.008135 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.010231 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9bm\" (UniqueName: \"kubernetes.io/projected/12d2f911-e40e-44ea-9081-1bb8a08c7cbb-kube-api-access-6h9bm\") pod \"service-ca-operator-777779d784-d4mss\" (UID: \"12d2f911-e40e-44ea-9081-1bb8a08c7cbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.021998 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.029202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q4nj\" (UniqueName: \"kubernetes.io/projected/a0a306b6-7739-4ec5-9dac-68001b21447c-kube-api-access-6q4nj\") pod \"machine-config-server-n8gvd\" (UID: \"a0a306b6-7739-4ec5-9dac-68001b21447c\") " pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.039159 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.046530 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.046977 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.546960623 +0000 UTC m=+154.432985931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.051397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd850de-cbf2-4d68-9226-223ba0e3cb72-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c97vp\" (UID: \"abd850de-cbf2-4d68-9226-223ba0e3cb72\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.069667 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ccpgf"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.072899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5s8m\" (UniqueName: \"kubernetes.io/projected/b853b964-2cab-4b75-93fe-df7bd4e0f033-kube-api-access-n5s8m\") pod \"kube-storage-version-migrator-operator-b67b599dd-7zl8z\" (UID: \"b853b964-2cab-4b75-93fe-df7bd4e0f033\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.084707 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.087592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpmd\" (UniqueName: \"kubernetes.io/projected/1538fe67-0070-42fe-86e4-3ad017710b44-kube-api-access-2qpmd\") pod \"dns-default-kg57j\" (UID: \"1538fe67-0070-42fe-86e4-3ad017710b44\") " pod="openshift-dns/dns-default-kg57j" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.106416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zs7\" (UniqueName: \"kubernetes.io/projected/6559fbed-d2e2-4578-992e-b4088643cd24-kube-api-access-p4zs7\") pod \"machine-config-operator-74547568cd-swm82\" (UID: \"6559fbed-d2e2-4578-992e-b4088643cd24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.109553 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-x8mdr"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.112043 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.115057 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799bd962_f454_498a_88e6_58793b08d732.slice/crio-3cde3f926bcccfd98d212a28aff8f431f9e977c5333b923bbc8451e9e51fc56f WatchSource:0}: Error finding container 3cde3f926bcccfd98d212a28aff8f431f9e977c5333b923bbc8451e9e51fc56f: Status 404 returned error can't find the container with id 3cde3f926bcccfd98d212a28aff8f431f9e977c5333b923bbc8451e9e51fc56f Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.121519 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.130813 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n8gvd" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.142241 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e722343-2649-43dd-a7dd-842e41136658-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nsfvl\" (UID: \"1e722343-2649-43dd-a7dd-842e41136658\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.146195 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhd4\" (UniqueName: \"kubernetes.io/projected/8275a125-7a77-49fe-bebf-d140013d5a3f-kube-api-access-5jhd4\") pod \"ingress-canary-7flhj\" (UID: \"8275a125-7a77-49fe-bebf-d140013d5a3f\") " pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.147954 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.148379 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.648363304 +0000 UTC m=+154.534388612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.150469 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9blls" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.156508 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-thrt8"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.157907 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7flhj" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.164961 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.173439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kmgb\" (UniqueName: \"kubernetes.io/projected/d982ea17-3155-4d01-bf28-5045db9fe780-kube-api-access-9kmgb\") pod \"olm-operator-6b444d44fb-4cxkf\" (UID: \"d982ea17-3155-4d01-bf28-5045db9fe780\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.190219 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-kg57j" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.197476 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.206852 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dftvm"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.208303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cc565c2-461e-4902-917d-51dc8aba5cf7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m7q5b\" (UID: \"2cc565c2-461e-4902-917d-51dc8aba5cf7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.233808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hxg\" (UniqueName: \"kubernetes.io/projected/221a0896-9d41-4bf4-b05e-57c067e8b885-kube-api-access-97hxg\") pod \"collect-profiles-29329185-6dnq5\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.235771 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.238553 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2wt6\" (UniqueName: \"kubernetes.io/projected/59305dc8-3a12-401b-83d9-532e520b72b0-kube-api-access-v2wt6\") pod \"multus-admission-controller-857f4d67dd-fhfvm\" (UID: \"59305dc8-3a12-401b-83d9-532e520b72b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 
11:50:00.248567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.249540 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.749514707 +0000 UTC m=+154.635540015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.255298 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d829t\" (UniqueName: \"kubernetes.io/projected/3a4bf8ec-4eaf-483f-b1c2-a981e3a83091-kube-api-access-d829t\") pod \"packageserver-d55dfcdfc-x42zz\" (UID: \"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.270068 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjr8z\" (UniqueName: \"kubernetes.io/projected/8c14b3fd-7202-4e51-9528-7eb2825f4a5b-kube-api-access-xjr8z\") pod \"csi-hostpathplugin-m9cln\" (UID: \"8c14b3fd-7202-4e51-9528-7eb2825f4a5b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.275223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xnxs2"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.275709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.276646 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.295679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxswz\" (UniqueName: \"kubernetes.io/projected/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-kube-api-access-zxswz\") pod \"marketplace-operator-79b997595-928ch\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.301074 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.315973 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.331018 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.331718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gv4m\" (UniqueName: \"kubernetes.io/projected/f1163149-6bad-4274-87de-23e0b45a284e-kube-api-access-9gv4m\") pod \"catalog-operator-68c6474976-zbr46\" (UID: \"f1163149-6bad-4274-87de-23e0b45a284e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.337891 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcwhl\" (UniqueName: \"kubernetes.io/projected/1658bc0b-def5-4f14-a28d-f20c83f435da-kube-api-access-zcwhl\") pod \"package-server-manager-789f6589d5-k94lv\" (UID: \"1658bc0b-def5-4f14-a28d-f20c83f435da\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.355229 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.355785 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.855773164 +0000 UTC m=+154.741798462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.383425 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.391995 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.398603 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.405341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.428426 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0a306b6_7739_4ec5_9dac_68001b21447c.slice/crio-22db215e5434d8e8fee14da22e64abef472fa066c08b812dea7de9c09807c424 WatchSource:0}: Error finding container 22db215e5434d8e8fee14da22e64abef472fa066c08b812dea7de9c09807c424: Status 404 returned error can't find the container with id 22db215e5434d8e8fee14da22e64abef472fa066c08b812dea7de9c09807c424 Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.437992 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.446291 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.455859 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.456242 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:00.956227767 +0000 UTC m=+154.842253075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.474052 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.481074 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.511125 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.554480 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2nxc"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.558597 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.559029 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.059013449 +0000 UTC m=+154.945038757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.572276 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.618809 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.640500 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.668408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.668882 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.168865875 +0000 UTC m=+155.054891183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.675445 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.689781 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt"] Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.694923 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bs4m6"] Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.712295 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5924455c_9450_47f0_a762_355eae4c73c4.slice/crio-11ddd6a6257dbfc6f86a64d090a47c87e55b8301fcf42c217770f3476d93a85f WatchSource:0}: Error finding container 11ddd6a6257dbfc6f86a64d090a47c87e55b8301fcf42c217770f3476d93a85f: Status 404 returned error can't find the container with id 11ddd6a6257dbfc6f86a64d090a47c87e55b8301fcf42c217770f3476d93a85f Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.714503 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef5f0384_c041_47c5_80cf_c545d95c5876.slice/crio-a169385384d56b3be7139ef65b39a5ff4e5326fe9fe8d7df36a72c2959e70bca WatchSource:0}: Error finding container a169385384d56b3be7139ef65b39a5ff4e5326fe9fe8d7df36a72c2959e70bca: Status 404 returned error 
can't find the container with id a169385384d56b3be7139ef65b39a5ff4e5326fe9fe8d7df36a72c2959e70bca Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.723642 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e2c2d4_0ea2_4767_b88d_0229390bff9a.slice/crio-9873937a373a7d0ab10c1bf7ca3929cdf79480a424ed7e2f437139a815a2c08b WatchSource:0}: Error finding container 9873937a373a7d0ab10c1bf7ca3929cdf79480a424ed7e2f437139a815a2c08b: Status 404 returned error can't find the container with id 9873937a373a7d0ab10c1bf7ca3929cdf79480a424ed7e2f437139a815a2c08b Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.770070 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.770649 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.270637157 +0000 UTC m=+155.156662465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.797500 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc407428_3c19_40aa_b476_c159f9b8f2a4.slice/crio-79c9a343f6d1bf9d040d6826994966ee6cfb7d622ab3e5a2eed522819d47f44f WatchSource:0}: Error finding container 79c9a343f6d1bf9d040d6826994966ee6cfb7d622ab3e5a2eed522819d47f44f: Status 404 returned error can't find the container with id 79c9a343f6d1bf9d040d6826994966ee6cfb7d622ab3e5a2eed522819d47f44f Oct 06 11:50:00 crc kubenswrapper[4958]: W1006 11:50:00.806801 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0538c536_f662_48c7_98fe_a1ceff33ade3.slice/crio-6770642d8d5229a80965a33ca5267a075186190332fe1d2b77eb901d24ecef8f WatchSource:0}: Error finding container 6770642d8d5229a80965a33ca5267a075186190332fe1d2b77eb901d24ecef8f: Status 404 returned error can't find the container with id 6770642d8d5229a80965a33ca5267a075186190332fe1d2b77eb901d24ecef8f Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.839416 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" event={"ID":"2eb410e0-7f66-4f68-8448-35569e09f1c5","Type":"ContainerStarted","Data":"babcd45bab95bd77b1ef3ab0449d67dd781db11a5bff22f29bf8c3cbe18130ae"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.839460 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" event={"ID":"2eb410e0-7f66-4f68-8448-35569e09f1c5","Type":"ContainerStarted","Data":"b33e7f3e42daa9452c1afa4ebf9c1eda238ea33bb834659e082ead1278195b83"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.841224 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n8gvd" event={"ID":"a0a306b6-7739-4ec5-9dac-68001b21447c","Type":"ContainerStarted","Data":"22db215e5434d8e8fee14da22e64abef472fa066c08b812dea7de9c09807c424"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.870342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" event={"ID":"ef5f0384-c041-47c5-80cf-c545d95c5876","Type":"ContainerStarted","Data":"a169385384d56b3be7139ef65b39a5ff4e5326fe9fe8d7df36a72c2959e70bca"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.871302 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.871424 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" event={"ID":"5715f094-2c1f-4d0c-8901-f4378d613048","Type":"ContainerStarted","Data":"47d337575fddedf31ce37031caea8d510565e85460605f9fb0eda62f0eff8312"} Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.871664 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:50:01.371622314 +0000 UTC m=+155.257647622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.871701 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.872434 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.372410738 +0000 UTC m=+155.258436046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.895206 4958 generic.go:334] "Generic (PLEG): container finished" podID="3f84a255-5034-424e-acf0-5ba9f4aa0531" containerID="c53937e6e864bc2b2db722adeb30333c2d51b651b67b8b0fa51e9311e40411ff" exitCode=0 Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.895967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" event={"ID":"3f84a255-5034-424e-acf0-5ba9f4aa0531","Type":"ContainerDied","Data":"c53937e6e864bc2b2db722adeb30333c2d51b651b67b8b0fa51e9311e40411ff"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.895992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" event={"ID":"3f84a255-5034-424e-acf0-5ba9f4aa0531","Type":"ContainerStarted","Data":"8a5f5a1c4402b41cd987859f6c33621026b4aef3d5d17318d8302752f6f27444"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.898997 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" event={"ID":"5af269ee-1565-4c36-a416-c4e2e7397fc5","Type":"ContainerStarted","Data":"ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.899029 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.899038 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" event={"ID":"5af269ee-1565-4c36-a416-c4e2e7397fc5","Type":"ContainerStarted","Data":"c0c05ff7108ed9a6e6874257ea66c5c2cd7dc794bdb1a691edd9db2ec827c41d"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.900361 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" event={"ID":"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c","Type":"ContainerStarted","Data":"d64530cd2b2de126ef66be7f3ee827427de8d5cfbafe349c5e6a7d21792ceaf8"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.903347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" event={"ID":"5924455c-9450-47f0-a762-355eae4c73c4","Type":"ContainerStarted","Data":"11ddd6a6257dbfc6f86a64d090a47c87e55b8301fcf42c217770f3476d93a85f"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.904988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" event={"ID":"cbf992a4-d7af-455c-953b-c865445feb6c","Type":"ContainerStarted","Data":"cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.905015 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" event={"ID":"cbf992a4-d7af-455c-953b-c865445feb6c","Type":"ContainerStarted","Data":"18d80ed7d70cc68fdd1f03c7113d8070b382683417a0fbd9dd44b3ca98033247"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.906990 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.909089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-z2w99" event={"ID":"11d97a87-1e86-4e2c-aad1-c66fd9dafea3","Type":"ContainerStarted","Data":"6ecd4877cf08794b55d189dd998f29bc1d1c5afa38301e2013bfef4fb65867ec"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.909112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z2w99" event={"ID":"11d97a87-1e86-4e2c-aad1-c66fd9dafea3","Type":"ContainerStarted","Data":"ddc89d61260ec9c5b2106daf867f8925e1d2e24033aa580d167c1c51af7733c1"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.909868 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.927136 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" event={"ID":"97ec580a-268f-4a8a-a6e1-9b64e463ca20","Type":"ContainerStarted","Data":"c1a12af6565cad3e49095acf3385be59eb7adc33f99e0ec7b87bb62148895c46"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.933882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" event={"ID":"21e2c2d4-0ea2-4767-b88d-0229390bff9a","Type":"ContainerStarted","Data":"9873937a373a7d0ab10c1bf7ca3929cdf79480a424ed7e2f437139a815a2c08b"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.934889 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mfm8s" event={"ID":"37c87c2e-92b8-4b83-be26-cd63ec636eda","Type":"ContainerStarted","Data":"9164cffed00e5e005a10328399ca82b450f4b42fdb7db607705060253d358047"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.936411 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ccpgf" 
event={"ID":"799bd962-f454-498a-88e6-58793b08d732","Type":"ContainerStarted","Data":"c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.936436 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ccpgf" event={"ID":"799bd962-f454-498a-88e6-58793b08d732","Type":"ContainerStarted","Data":"3cde3f926bcccfd98d212a28aff8f431f9e977c5333b923bbc8451e9e51fc56f"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.939869 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" event={"ID":"29169783-2273-40c2-abe9-85f3ee6c4f7f","Type":"ContainerStarted","Data":"3c2d3ea6b497ebd1cb9c7141b9a1f8694e1054b558c193095d8b570c681f2918"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.940726 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" event={"ID":"87166f97-d9f0-4391-87b6-0ea7ce0208e1","Type":"ContainerStarted","Data":"9559f00dc325696c0e070b63651dd64765fed84ea93907991d9174006bc992ad"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.941497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" event={"ID":"b261e077-f9df-49ba-87df-a5d1988755bf","Type":"ContainerStarted","Data":"42b5f332c2a594e6b5149597e1924ed5b96bec068af140fd0108b218ad424613"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.942843 4958 generic.go:334] "Generic (PLEG): container finished" podID="e72b392d-2c9c-462e-bbe5-8f839912c083" containerID="fbf8725a3e865f776dd0029eeffa6e410af88f1562403bf0138887eb22fc8afd" exitCode=0 Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.942891 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" 
event={"ID":"e72b392d-2c9c-462e-bbe5-8f839912c083","Type":"ContainerDied","Data":"fbf8725a3e865f776dd0029eeffa6e410af88f1562403bf0138887eb22fc8afd"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.942907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" event={"ID":"e72b392d-2c9c-462e-bbe5-8f839912c083","Type":"ContainerStarted","Data":"a8047659dd84da8d06d9eeb75642631e5ba3f3545c5bbb47295625a3bfafdcb0"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.945093 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" event={"ID":"dc407428-3c19-40aa-b476-c159f9b8f2a4","Type":"ContainerStarted","Data":"79c9a343f6d1bf9d040d6826994966ee6cfb7d622ab3e5a2eed522819d47f44f"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.946031 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" event={"ID":"975b4fb4-827e-4d99-b37a-5bf622b2c889","Type":"ContainerStarted","Data":"68714d1763c4afc117ffed243aac99cbc4f2c11a7da8491f4bf861ceba1cbe46"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.947800 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" event={"ID":"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8","Type":"ContainerStarted","Data":"737b3cc7148c04dc4bcee628a74d852edf2c2734414f93e89b2d22c0b2c5a95c"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.947826 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" event={"ID":"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8","Type":"ContainerStarted","Data":"4ae9473b02683cdce050ff9753dfa89cc0866cf587406d4e64a7051a4e319098"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.948907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" event={"ID":"7dd9d221-704f-4205-8a53-a7d42e162723","Type":"ContainerStarted","Data":"5487b32eb1da6b34bbb8e0f80c1ca655471967e38524cf2f88b0f1a3a3e39dae"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.948931 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" event={"ID":"7dd9d221-704f-4205-8a53-a7d42e162723","Type":"ContainerStarted","Data":"cfd4f94775f34ca1f391263c0f049a0e63be4f76b6759989fdc4facad83b12b1"} Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.951786 4958 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-thrt8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.951815 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.954498 4958 patch_prober.go:28] interesting pod/console-operator-58897d9998-z2w99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.954524 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z2w99" podUID="11d97a87-1e86-4e2c-aad1-c66fd9dafea3" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Oct 06 11:50:00 crc kubenswrapper[4958]: I1006 11:50:00.974181 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:00 crc kubenswrapper[4958]: E1006 11:50:00.975354 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.475334415 +0000 UTC m=+155.361359723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.021865 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9blls"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.075854 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:01 crc 
kubenswrapper[4958]: E1006 11:50:01.079479 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.579464888 +0000 UTC m=+155.465490196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.127510 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-z2w99" podStartSLOduration=134.127492928 podStartE2EDuration="2m14.127492928s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:01.125604011 +0000 UTC m=+155.011629309" watchObservedRunningTime="2025-10-06 11:50:01.127492928 +0000 UTC m=+155.013518236" Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.144419 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-928ch"] Oct 06 11:50:01 crc kubenswrapper[4958]: W1006 11:50:01.160308 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3331225c_8593_4a8c_a256_1aee839e9bb3.slice/crio-05f53b376e2c1f66fccd513b1612f8f7027037d675a47cabc99473bf7aa407da WatchSource:0}: Error finding container 05f53b376e2c1f66fccd513b1612f8f7027037d675a47cabc99473bf7aa407da: Status 
404 returned error can't find the container with id 05f53b376e2c1f66fccd513b1612f8f7027037d675a47cabc99473bf7aa407da Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.182554 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.182852 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.682835678 +0000 UTC m=+155.568860986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.189544 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d4mss"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.195884 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.206996 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7flhj"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 
11:50:01.221388 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kg57j"] Oct 06 11:50:01 crc kubenswrapper[4958]: W1006 11:50:01.241565 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12d2f911_e40e_44ea_9081_1bb8a08c7cbb.slice/crio-194f3de9d02ed00eb6ab3a076a88f105a2ca5fdef45ae741badede8b62711ab4 WatchSource:0}: Error finding container 194f3de9d02ed00eb6ab3a076a88f105a2ca5fdef45ae741badede8b62711ab4: Status 404 returned error can't find the container with id 194f3de9d02ed00eb6ab3a076a88f105a2ca5fdef45ae741badede8b62711ab4 Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.269318 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" podStartSLOduration=134.268832454 podStartE2EDuration="2m14.268832454s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:01.251456369 +0000 UTC m=+155.137481677" watchObservedRunningTime="2025-10-06 11:50:01.268832454 +0000 UTC m=+155.154857762" Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.279889 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.283519 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.285977 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: 
\"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.286422 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.786405974 +0000 UTC m=+155.672431282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.366033 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.387779 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.388292 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.888263809 +0000 UTC m=+155.774289117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.416614 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.440278 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.441251 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" podStartSLOduration=134.441238468 podStartE2EDuration="2m14.441238468s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:01.412427148 +0000 UTC m=+155.298452466" watchObservedRunningTime="2025-10-06 11:50:01.441238468 +0000 UTC m=+155.327263776" Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.482575 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swm82"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.485700 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.488729 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.488764 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fhfvm"] Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.489410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.489745 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:01.989732542 +0000 UTC m=+155.875757850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.497585 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz"] Oct 06 11:50:01 crc kubenswrapper[4958]: W1006 11:50:01.510102 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd982ea17_3155_4d01_bf28_5045db9fe780.slice/crio-0092d12e4a66e9978c9ac4fe57c57b96582fe64475374b77361433d856cbb28c WatchSource:0}: Error finding container 0092d12e4a66e9978c9ac4fe57c57b96582fe64475374b77361433d856cbb28c: Status 404 returned error can't find the container with id 0092d12e4a66e9978c9ac4fe57c57b96582fe64475374b77361433d856cbb28c Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.530432 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m9cln"] Oct 06 11:50:01 crc kubenswrapper[4958]: W1006 11:50:01.561649 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc565c2_461e_4902_917d_51dc8aba5cf7.slice/crio-ba37970dbaf809ba5ca6d84ae2d5d392c851fd6537896715673121ebee0cb871 WatchSource:0}: Error finding container ba37970dbaf809ba5ca6d84ae2d5d392c851fd6537896715673121ebee0cb871: Status 404 returned error can't find the container with id ba37970dbaf809ba5ca6d84ae2d5d392c851fd6537896715673121ebee0cb871 Oct 06 11:50:01 crc kubenswrapper[4958]: W1006 11:50:01.577297 4958 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59305dc8_3a12_401b_83d9_532e520b72b0.slice/crio-2a73d0904904082724ff8ce5bddb1162c0326cdc29349df6dff925c65410ab09 WatchSource:0}: Error finding container 2a73d0904904082724ff8ce5bddb1162c0326cdc29349df6dff925c65410ab09: Status 404 returned error can't find the container with id 2a73d0904904082724ff8ce5bddb1162c0326cdc29349df6dff925c65410ab09 Oct 06 11:50:01 crc kubenswrapper[4958]: W1006 11:50:01.590578 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1163149_6bad_4274_87de_23e0b45a284e.slice/crio-bab51ba94a727b0337759af087ec44daaa65a4469afb68385f103c9b7e60faa5 WatchSource:0}: Error finding container bab51ba94a727b0337759af087ec44daaa65a4469afb68385f103c9b7e60faa5: Status 404 returned error can't find the container with id bab51ba94a727b0337759af087ec44daaa65a4469afb68385f103c9b7e60faa5 Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.591009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.591482 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.091447252 +0000 UTC m=+155.977472560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.695298 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.696112 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.196084731 +0000 UTC m=+156.082110029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.708188 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.801010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.801397 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.301361838 +0000 UTC m=+156.187387156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.804473 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.805167 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.305130622 +0000 UTC m=+156.191155930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.893308 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-m9l68" podStartSLOduration=134.893290923 podStartE2EDuration="2m14.893290923s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:01.892566791 +0000 UTC m=+155.778592099" watchObservedRunningTime="2025-10-06 11:50:01.893290923 +0000 UTC m=+155.779316231" Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.906597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:01 crc kubenswrapper[4958]: E1006 11:50:01.906938 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.406867993 +0000 UTC m=+156.292893301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.963856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" event={"ID":"8c14b3fd-7202-4e51-9528-7eb2825f4a5b","Type":"ContainerStarted","Data":"5863aeb51a3facdc59a49ccadd29d6ab28994ab45884b40bb817aee548cb5791"} Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.968042 4958 generic.go:334] "Generic (PLEG): container finished" podID="975b4fb4-827e-4d99-b37a-5bf622b2c889" containerID="000004e3e0b06eaff41b96e9b4803e7cb83e8b34ddafbed10b7086a1d9ee5e5a" exitCode=0 Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.968137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" event={"ID":"975b4fb4-827e-4d99-b37a-5bf622b2c889","Type":"ContainerDied","Data":"000004e3e0b06eaff41b96e9b4803e7cb83e8b34ddafbed10b7086a1d9ee5e5a"} Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.975206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" event={"ID":"221a0896-9d41-4bf4-b05e-57c067e8b885","Type":"ContainerStarted","Data":"d787d72a040a874b3756208cf03b7c3da0342a66991a052efaa3796cf1fef787"} Oct 06 11:50:01 crc kubenswrapper[4958]: I1006 11:50:01.988183 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" 
event={"ID":"3f84a255-5034-424e-acf0-5ba9f4aa0531","Type":"ContainerStarted","Data":"0389774d87dfa2c3b3c29bb69c80e7d74b02e9bb3df6d5bc7c0109f557e6b97d"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.009572 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" event={"ID":"ef5f0384-c041-47c5-80cf-c545d95c5876","Type":"ContainerStarted","Data":"ceb8823194e537500c724906aad35bb75ab2cc8a5b521495a550c6120dc82c99"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.023185 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.026261 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.526232556 +0000 UTC m=+156.412257864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.055967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" event={"ID":"7dd9d221-704f-4205-8a53-a7d42e162723","Type":"ContainerStarted","Data":"c157c9f65c1876013d077458e40174ba0cf012fa6f9bb5cdc5ee71ab0a499429"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.098362 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" event={"ID":"6604f8c6-1db7-4bc6-96a7-7bbd56ab2b4c","Type":"ContainerStarted","Data":"61de95f093242f846346047ef7f6dba1f6f6c6da0ee28bb88143feb789ef5c01"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.105577 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mfm8s" event={"ID":"37c87c2e-92b8-4b83-be26-cd63ec636eda","Type":"ContainerStarted","Data":"c7f466f43a18bb5e6e0afdbe03afc4ef8e27475ab01142dab8abdfe1d4a3bad5"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.113347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9blls" event={"ID":"3331225c-8593-4a8c-a256-1aee839e9bb3","Type":"ContainerStarted","Data":"05f53b376e2c1f66fccd513b1612f8f7027037d675a47cabc99473bf7aa407da"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.126671 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kg57j" 
event={"ID":"1538fe67-0070-42fe-86e4-3ad017710b44","Type":"ContainerStarted","Data":"cf8b03dda22108f73a609eba7b8f6ae5e54e39fa741f81b09db9d12dc44f8415"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.130381 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" event={"ID":"1e722343-2649-43dd-a7dd-842e41136658","Type":"ContainerStarted","Data":"c09366505e7be5e6da4290dab6f075f3bc8305e7bbaeb104a2971f6fb0a631cd"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.131021 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.133475 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.633456453 +0000 UTC m=+156.519481761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.145792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" event={"ID":"d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8","Type":"ContainerStarted","Data":"cfeace1a451c74d3df64aed90140d1b49f29b9b938814427263d02b873b02387"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.148943 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" event={"ID":"1658bc0b-def5-4f14-a28d-f20c83f435da","Type":"ContainerStarted","Data":"ad00ddcd0c72056723c87753588fbcd7688e286f66369e88eae53b7a2ab21b54"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.150066 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" event={"ID":"5715f094-2c1f-4d0c-8901-f4378d613048","Type":"ContainerStarted","Data":"846d347f30114acfb8d5e8471efb99cbc74329d4c8ecef588039636bb2d0c8ac"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.151173 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" event={"ID":"abd850de-cbf2-4d68-9226-223ba0e3cb72","Type":"ContainerStarted","Data":"3ccaf98c80df96fff1da1ed60a6a6aaa96d301ef5f88c27f13f93c970c527728"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.152008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bs4m6" 
event={"ID":"0538c536-f662-48c7-98fe-a1ceff33ade3","Type":"ContainerStarted","Data":"9d8fba0d070a37e8ae5c93bbf735481cb21a14363f3ac11338f533d0b0d27698"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.152032 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bs4m6" event={"ID":"0538c536-f662-48c7-98fe-a1ceff33ade3","Type":"ContainerStarted","Data":"6770642d8d5229a80965a33ca5267a075186190332fe1d2b77eb901d24ecef8f"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.152672 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.154668 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" event={"ID":"b261e077-f9df-49ba-87df-a5d1988755bf","Type":"ContainerStarted","Data":"8cbd6e8af1b97b3ea0d82273657c0f9e032839817dbbf48f69a053773212df93"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.155510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" event={"ID":"21e2c2d4-0ea2-4767-b88d-0229390bff9a","Type":"ContainerStarted","Data":"daa8a3a9d6dd01f6a681aaa1a13e988d53ab16df17bbe81b3a779cda8b56dde5"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.160735 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" event={"ID":"2cc565c2-461e-4902-917d-51dc8aba5cf7","Type":"ContainerStarted","Data":"ba37970dbaf809ba5ca6d84ae2d5d392c851fd6537896715673121ebee0cb871"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.160856 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-bs4m6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": 
dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.160887 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bs4m6" podUID="0538c536-f662-48c7-98fe-a1ceff33ade3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.161929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" event={"ID":"59305dc8-3a12-401b-83d9-532e520b72b0","Type":"ContainerStarted","Data":"2a73d0904904082724ff8ce5bddb1162c0326cdc29349df6dff925c65410ab09"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.175374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" event={"ID":"b853b964-2cab-4b75-93fe-df7bd4e0f033","Type":"ContainerStarted","Data":"0c1370bae9177da084863f63697c6ddbafb165b5bcdc8a453b8db62116258ff5"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.179103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" event={"ID":"f1163149-6bad-4274-87de-23e0b45a284e","Type":"ContainerStarted","Data":"bab51ba94a727b0337759af087ec44daaa65a4469afb68385f103c9b7e60faa5"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.190189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" event={"ID":"6559fbed-d2e2-4578-992e-b4088643cd24","Type":"ContainerStarted","Data":"7712f931dfccee678693be4e4cbed65b4361956873718a7c81ccc585d2a3f223"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.192543 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-928ch" event={"ID":"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4","Type":"ContainerStarted","Data":"d89319e4f2e5420619056e560443867ad9dba732e32866069f0168ec08df2b7e"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.195499 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7flhj" event={"ID":"8275a125-7a77-49fe-bebf-d140013d5a3f","Type":"ContainerStarted","Data":"f823e66f3229d5b6edeed42faeeb4d183b8a70779aeea2d568d826cfb38aa6c8"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.202029 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ccpgf" podStartSLOduration=135.202014642 podStartE2EDuration="2m15.202014642s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.1727848 +0000 UTC m=+156.058810128" watchObservedRunningTime="2025-10-06 11:50:02.202014642 +0000 UTC m=+156.088039950" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.206941 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" event={"ID":"87166f97-d9f0-4391-87b6-0ea7ce0208e1","Type":"ContainerStarted","Data":"55424fe7ed492a4fc85e0900e39cb52e83b2e4d3886bb82110c6aa89b54fa6f8"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.208020 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.213110 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" event={"ID":"d982ea17-3155-4d01-bf28-5045db9fe780","Type":"ContainerStarted","Data":"0092d12e4a66e9978c9ac4fe57c57b96582fe64475374b77361433d856cbb28c"} Oct 06 11:50:02 
crc kubenswrapper[4958]: I1006 11:50:02.213911 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.218110 4958 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dftvm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.218172 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" podUID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.218211 4958 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4cxkf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.218255 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" podUID="d982ea17-3155-4d01-bf28-5045db9fe780" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.223645 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" 
event={"ID":"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98","Type":"ContainerStarted","Data":"3f520dbdaec5e8c6fb8ac5402f400a36155e1d0bb5790b99da8d47bd9f3c8981"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.223676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" event={"ID":"9a23b5e6-14c9-46d0-aed5-4f8b88edfb98","Type":"ContainerStarted","Data":"c60896dfd2b635537f55e96d39f0fa48197107e56df0ca35cb9321f78b58b475"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.227012 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" event={"ID":"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091","Type":"ContainerStarted","Data":"1350cb4bf79f0ab9d0b152fafdbf558978b9d6b32051a54a0d6d0c87296bc265"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.233109 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.234872 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.734857374 +0000 UTC m=+156.620882682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.255496 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswc6" podStartSLOduration=135.255476466 podStartE2EDuration="2m15.255476466s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.203426535 +0000 UTC m=+156.089451843" watchObservedRunningTime="2025-10-06 11:50:02.255476466 +0000 UTC m=+156.141501774" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.260481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n8gvd" event={"ID":"a0a306b6-7739-4ec5-9dac-68001b21447c","Type":"ContainerStarted","Data":"a83aaddc65bb5c116124f1281c09a77452ffe27db7d1f3b1dc174acdbb410099"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.286268 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" event={"ID":"97ec580a-268f-4a8a-a6e1-9b64e463ca20","Type":"ContainerStarted","Data":"c459fe88aaf0fd9231826e88fc3cb14c0c26f7c33f7960b451364c97b24afac5"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.300718 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" 
event={"ID":"12d2f911-e40e-44ea-9081-1bb8a08c7cbb","Type":"ContainerStarted","Data":"194f3de9d02ed00eb6ab3a076a88f105a2ca5fdef45ae741badede8b62711ab4"} Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.302749 4958 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-thrt8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.302797 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.314335 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z2w99" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.333516 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.341304 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.841282216 +0000 UTC m=+156.727307564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.343353 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" podStartSLOduration=135.343306427 podStartE2EDuration="2m15.343306427s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.296091352 +0000 UTC m=+156.182116660" watchObservedRunningTime="2025-10-06 11:50:02.343306427 +0000 UTC m=+156.229331755" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.346488 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-x8mdr" podStartSLOduration=135.346476953 podStartE2EDuration="2m15.346476953s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.327070807 +0000 UTC m=+156.213096115" watchObservedRunningTime="2025-10-06 11:50:02.346476953 +0000 UTC m=+156.232502261" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.368443 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f2982" podStartSLOduration=135.368427715 podStartE2EDuration="2m15.368427715s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.36592553 +0000 UTC m=+156.251950838" watchObservedRunningTime="2025-10-06 11:50:02.368427715 +0000 UTC m=+156.254453023" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.407828 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" podStartSLOduration=135.407808704 podStartE2EDuration="2m15.407808704s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.407253137 +0000 UTC m=+156.293278445" watchObservedRunningTime="2025-10-06 11:50:02.407808704 +0000 UTC m=+156.293834012" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.434649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.434949 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:02.934938093 +0000 UTC m=+156.820963401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.490635 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bs4m6" podStartSLOduration=135.490612934 podStartE2EDuration="2m15.490612934s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.447828042 +0000 UTC m=+156.333853350" watchObservedRunningTime="2025-10-06 11:50:02.490612934 +0000 UTC m=+156.376638242" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.490738 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pp96v" podStartSLOduration=135.490732167 podStartE2EDuration="2m15.490732167s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.4904834 +0000 UTC m=+156.376508718" watchObservedRunningTime="2025-10-06 11:50:02.490732167 +0000 UTC m=+156.376757475" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.542022 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.542393 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.042379166 +0000 UTC m=+156.928404474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.544803 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mfm8s" podStartSLOduration=135.544779218 podStartE2EDuration="2m15.544779218s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.541419857 +0000 UTC m=+156.427445165" watchObservedRunningTime="2025-10-06 11:50:02.544779218 +0000 UTC m=+156.430804536" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.635944 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xnxs2" podStartSLOduration=135.63592641 podStartE2EDuration="2m15.63592641s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.595910972 
+0000 UTC m=+156.481936280" watchObservedRunningTime="2025-10-06 11:50:02.63592641 +0000 UTC m=+156.521951718" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.667233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.667520 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.167509223 +0000 UTC m=+157.053534531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.721167 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2gbnx" podStartSLOduration=135.721120101 podStartE2EDuration="2m15.721120101s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.637348483 +0000 UTC m=+156.523373791" watchObservedRunningTime="2025-10-06 11:50:02.721120101 +0000 UTC m=+156.607145409" Oct 06 
11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.723137 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" podStartSLOduration=135.723129482 podStartE2EDuration="2m15.723129482s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.719550754 +0000 UTC m=+156.605576072" watchObservedRunningTime="2025-10-06 11:50:02.723129482 +0000 UTC m=+156.609154790" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.790820 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.791063 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.291049022 +0000 UTC m=+157.177074320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.814227 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-v7kkf" podStartSLOduration=135.814207401 podStartE2EDuration="2m15.814207401s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.811960023 +0000 UTC m=+156.697985351" watchObservedRunningTime="2025-10-06 11:50:02.814207401 +0000 UTC m=+156.700232709" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.814351 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-n8gvd" podStartSLOduration=5.814346915 podStartE2EDuration="5.814346915s" podCreationTimestamp="2025-10-06 11:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:02.776907075 +0000 UTC m=+156.662932373" watchObservedRunningTime="2025-10-06 11:50:02.814346915 +0000 UTC m=+156.700372223" Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.891638 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: 
\"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.892271 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.392257767 +0000 UTC m=+157.278283075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:02 crc kubenswrapper[4958]: I1006 11:50:02.993831 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:02 crc kubenswrapper[4958]: E1006 11:50:02.994288 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.494270576 +0000 UTC m=+157.380295884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.008867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mfm8s" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.016880 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:50:03 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 06 11:50:03 crc kubenswrapper[4958]: [+]process-running ok Oct 06 11:50:03 crc kubenswrapper[4958]: healthz check failed Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.016943 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.096950 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.097488 4958 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.597461431 +0000 UTC m=+157.483486909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.198462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.198715 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.698694447 +0000 UTC m=+157.584719745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.198892 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.199195 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.699182622 +0000 UTC m=+157.585207930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.300536 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.300725 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.800696796 +0000 UTC m=+157.686722104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.300884 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.301328 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.801309954 +0000 UTC m=+157.687335262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.325444 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" event={"ID":"b853b964-2cab-4b75-93fe-df7bd4e0f033","Type":"ContainerStarted","Data":"b6f33b3b7d7576861733fefbef338571399d25bc6ccb5b31640253c9b7ff2105"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.333810 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.342927 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" event={"ID":"abd850de-cbf2-4d68-9226-223ba0e3cb72","Type":"ContainerStarted","Data":"a05186bbd65171975533fe63e451369fc7534cc8bf70266f894d6539968d7459"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.342960 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" event={"ID":"abd850de-cbf2-4d68-9226-223ba0e3cb72","Type":"ContainerStarted","Data":"d534801fa0ac185136ac53be7e74b1c9226942a6636b0eb25f070ce6da19e2dd"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.348789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" 
event={"ID":"12d2f911-e40e-44ea-9081-1bb8a08c7cbb","Type":"ContainerStarted","Data":"baf62933eb7a5a8544f65cac62f2df24e6a7c132a9a88da849064dd5e4c845d5"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.357068 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" event={"ID":"ef5f0384-c041-47c5-80cf-c545d95c5876","Type":"ContainerStarted","Data":"f7027567ef21d57d5d81ae3000a56768bad944f8abd1e335e2957f8487d88b6b"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.387484 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7zl8z" podStartSLOduration=136.387468975 podStartE2EDuration="2m16.387468975s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.374104792 +0000 UTC m=+157.260130100" watchObservedRunningTime="2025-10-06 11:50:03.387468975 +0000 UTC m=+157.273494283" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.394701 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" event={"ID":"975b4fb4-827e-4d99-b37a-5bf622b2c889","Type":"ContainerStarted","Data":"9d2f4470edb03d82296d25f940eca5c21ffb4ac250b4407ff45b8cc92b50dc60"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.395271 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.403578 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.404826 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:03.904809389 +0000 UTC m=+157.790834697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.412756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" event={"ID":"1e722343-2649-43dd-a7dd-842e41136658","Type":"ContainerStarted","Data":"b2ef8b529937b9960a6f1ddec23e103569c076b57364512d6e19f31531417ee2"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.414310 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d4mss" podStartSLOduration=136.414290905 podStartE2EDuration="2m16.414290905s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.411322135 +0000 UTC m=+157.297347443" watchObservedRunningTime="2025-10-06 11:50:03.414290905 +0000 UTC m=+157.300316213" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.414858 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" event={"ID":"1658bc0b-def5-4f14-a28d-f20c83f435da","Type":"ContainerStarted","Data":"eb553825abfc86dd1c580154282c37560c13de03d11acc7e558becc0eefc25d1"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.414879 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.414888 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" event={"ID":"1658bc0b-def5-4f14-a28d-f20c83f435da","Type":"ContainerStarted","Data":"b288c7ffde5241e2cacf18161ed948d4b67a334664b489c74cb4bee5a65b92b1"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.427990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" event={"ID":"3a4bf8ec-4eaf-483f-b1c2-a981e3a83091","Type":"ContainerStarted","Data":"d21d1fc9ca569be49bc93bc7085edbbff062d17aea1878dac64ffb7d6c0d64bd"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.429073 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.432878 4958 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x42zz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.432931 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" podUID="3a4bf8ec-4eaf-483f-b1c2-a981e3a83091" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.443414 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7ndvg" podStartSLOduration=136.443393423 podStartE2EDuration="2m16.443393423s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.44097918 +0000 UTC m=+157.327004488" watchObservedRunningTime="2025-10-06 11:50:03.443393423 +0000 UTC m=+157.329418731" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.479825 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" event={"ID":"29169783-2273-40c2-abe9-85f3ee6c4f7f","Type":"ContainerStarted","Data":"89e114fcdf39c77556bfb16cd8105da3d73805716a60c043419d39eeddf53ade"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.479888 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" event={"ID":"29169783-2273-40c2-abe9-85f3ee6c4f7f","Type":"ContainerStarted","Data":"3113acc22f12186794deea63c598432a6f69d95c705bc58398791043836bac20"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.494758 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxkkt" event={"ID":"dc407428-3c19-40aa-b476-c159f9b8f2a4","Type":"ContainerStarted","Data":"7d51889ecf5cf094ec427b410ed0f85490e3369feee26fa06d54a37d0d40a71e"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.505738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.507122 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.007108746 +0000 UTC m=+157.893134054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.517565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" event={"ID":"3f84a255-5034-424e-acf0-5ba9f4aa0531","Type":"ContainerStarted","Data":"2ccda7fc615bf22544eca3481e59063830f060811167e3ed86f8349b9d5f2fef"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.538907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" event={"ID":"f1163149-6bad-4274-87de-23e0b45a284e","Type":"ContainerStarted","Data":"6aa3074d02d332c59c0831848cade238805270adcc4c9994aa49851468c91d2a"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.540019 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 
11:50:03.540892 4958 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zbr46 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.540933 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" podUID="f1163149-6bad-4274-87de-23e0b45a284e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.566666 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" event={"ID":"e72b392d-2c9c-462e-bbe5-8f839912c083","Type":"ContainerStarted","Data":"925ba051e42a787e38adf72b6a67117d8e5a0f35056a10edb068379db3a1ad7b"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.580114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" event={"ID":"5924455c-9450-47f0-a762-355eae4c73c4","Type":"ContainerStarted","Data":"d46bdd13d091e368cd990ea1d67a55d8dadea13186124974d68ce802f038a3a4"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.580175 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" event={"ID":"5924455c-9450-47f0-a762-355eae4c73c4","Type":"ContainerStarted","Data":"705374c63afbf8f615b467b793f8ecdc48af9c4ccc08311d9186f2e8669ab93c"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.580882 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t2nxc" podStartSLOduration=136.580855042 
podStartE2EDuration="2m16.580855042s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.580469941 +0000 UTC m=+157.466495249" watchObservedRunningTime="2025-10-06 11:50:03.580855042 +0000 UTC m=+157.466880350" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.581278 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c97vp" podStartSLOduration=136.581272885 podStartE2EDuration="2m16.581272885s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.531858504 +0000 UTC m=+157.417883802" watchObservedRunningTime="2025-10-06 11:50:03.581272885 +0000 UTC m=+157.467298193" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.589569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" event={"ID":"2cc565c2-461e-4902-917d-51dc8aba5cf7","Type":"ContainerStarted","Data":"776d4894a7a854e8c78a2f92d543eca3144c7f20b1c012b64a6e709afa91700e"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.594300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" event={"ID":"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4","Type":"ContainerStarted","Data":"20adad655409ba9f8f35264302dee6ceedb4284abb26084ce6fa087cc78f907d"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.614993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.616995 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.116969273 +0000 UTC m=+158.002994581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.617911 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.623018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7flhj" event={"ID":"8275a125-7a77-49fe-bebf-d140013d5a3f","Type":"ContainerStarted","Data":"f8085ebad617b5773bfe01ba1f16cbbd4d1336d2cd728f89578aaa4332660ec5"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.623595 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" podStartSLOduration=136.623570272 podStartE2EDuration="2m16.623570272s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.618113197 +0000 UTC m=+157.504138505" watchObservedRunningTime="2025-10-06 11:50:03.623570272 
+0000 UTC m=+157.509595580" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.649373 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-928ch container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.649485 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.651740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" event={"ID":"d982ea17-3155-4d01-bf28-5045db9fe780","Type":"ContainerStarted","Data":"b7c92127bcfb91919bbdae566429608f3f66a20bf518ba4cc26058f3b744b8b0"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.653008 4958 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4cxkf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.653047 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" podUID="d982ea17-3155-4d01-bf28-5045db9fe780" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.687885 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" event={"ID":"b261e077-f9df-49ba-87df-a5d1988755bf","Type":"ContainerStarted","Data":"e6abb5dc233f6261ee53ae8555725adab29cac1e3b32fd08ce741c3d11a79c9e"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.694077 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" podStartSLOduration=136.69405861 podStartE2EDuration="2m16.69405861s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.693675318 +0000 UTC m=+157.579700636" watchObservedRunningTime="2025-10-06 11:50:03.69405861 +0000 UTC m=+157.580083908" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.713890 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" event={"ID":"59305dc8-3a12-401b-83d9-532e520b72b0","Type":"ContainerStarted","Data":"20a4364305dec24ed164e16009cd18a496cc90868d8204912e87ea20814cda9b"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.721844 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.723429 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsfvl" podStartSLOduration=136.723400335 podStartE2EDuration="2m16.723400335s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.721960232 +0000 UTC m=+157.607985540" watchObservedRunningTime="2025-10-06 11:50:03.723400335 +0000 UTC m=+157.609425643" Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.725259 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.225239801 +0000 UTC m=+158.111265109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.739118 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" event={"ID":"221a0896-9d41-4bf4-b05e-57c067e8b885","Type":"ContainerStarted","Data":"13b32212c738a7428967dc14b36ee17dbb64e527c6d599368c48b4ac82b29aaa"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.775180 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" podStartSLOduration=136.775134287 podStartE2EDuration="2m16.775134287s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.773137267 +0000 UTC m=+157.659162595" watchObservedRunningTime="2025-10-06 11:50:03.775134287 +0000 UTC 
m=+157.661159595" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.778795 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" event={"ID":"6559fbed-d2e2-4578-992e-b4088643cd24","Type":"ContainerStarted","Data":"cb85f2a284da100194f50a8e3d090fe089a6aa7eb039463543393fcddd70e7da"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.778883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" event={"ID":"6559fbed-d2e2-4578-992e-b4088643cd24","Type":"ContainerStarted","Data":"c0eefb37fc0fa2670da0f9ae8f58cace1a55328ba2a5b7e60653c145da7c1cc7"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.812576 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9blls" event={"ID":"3331225c-8593-4a8c-a256-1aee839e9bb3","Type":"ContainerStarted","Data":"6c94bd41848294bbd276732d0f52d715a7c733f7dedf454ef428bb191724bfa8"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.827836 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:03 crc kubenswrapper[4958]: E1006 11:50:03.828844 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.328821557 +0000 UTC m=+158.214846865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.834767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kg57j" event={"ID":"1538fe67-0070-42fe-86e4-3ad017710b44","Type":"ContainerStarted","Data":"3d124b9de53c2a365f11240e61ac3955b75ac1b76434e8f3effba715adba17dc"} Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.842378 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-bs4m6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.842459 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bs4m6" podUID="0538c536-f662-48c7-98fe-a1ceff33ade3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.851530 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m7q5b" podStartSLOduration=136.851509192 podStartE2EDuration="2m16.851509192s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.844830381 +0000 UTC m=+157.730855689" 
watchObservedRunningTime="2025-10-06 11:50:03.851509192 +0000 UTC m=+157.737534500" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.851997 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" podStartSLOduration=136.851992617 podStartE2EDuration="2m16.851992617s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.819934099 +0000 UTC m=+157.705959407" watchObservedRunningTime="2025-10-06 11:50:03.851992617 +0000 UTC m=+157.738017925" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.866021 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.897794 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wlv4g" podStartSLOduration=136.897774299 podStartE2EDuration="2m16.897774299s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:03.887695455 +0000 UTC m=+157.773720763" watchObservedRunningTime="2025-10-06 11:50:03.897774299 +0000 UTC m=+157.783799607" Oct 06 11:50:03 crc kubenswrapper[4958]: I1006 11:50:03.932218 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:03 crc kubenswrapper[4958]: 
E1006 11:50:03.937012 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.436997283 +0000 UTC m=+158.323022591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.017362 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:50:04 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 06 11:50:04 crc kubenswrapper[4958]: [+]process-running ok Oct 06 11:50:04 crc kubenswrapper[4958]: healthz check failed Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.017443 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.034335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.034814 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.534793585 +0000 UTC m=+158.420818893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.049009 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" podStartSLOduration=137.048991283 podStartE2EDuration="2m17.048991283s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.045883279 +0000 UTC m=+157.931908587" watchObservedRunningTime="2025-10-06 11:50:04.048991283 +0000 UTC m=+157.935016591" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.136193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.136659 4958 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.636638849 +0000 UTC m=+158.522664157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.237689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.238338 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.738322528 +0000 UTC m=+158.624347836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.242618 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" podStartSLOduration=137.242604407 podStartE2EDuration="2m17.242604407s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.1820585 +0000 UTC m=+158.068083818" watchObservedRunningTime="2025-10-06 11:50:04.242604407 +0000 UTC m=+158.128629715" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.283597 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" podStartSLOduration=137.283582884 podStartE2EDuration="2m17.283582884s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.243922427 +0000 UTC m=+158.129947735" watchObservedRunningTime="2025-10-06 11:50:04.283582884 +0000 UTC m=+158.169608192" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.330607 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7flhj" podStartSLOduration=7.330591173 podStartE2EDuration="7.330591173s" podCreationTimestamp="2025-10-06 11:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.284567584 +0000 UTC m=+158.170592892" watchObservedRunningTime="2025-10-06 11:50:04.330591173 +0000 UTC m=+158.216616481" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.339548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.339810 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.839799461 +0000 UTC m=+158.725824769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.380884 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gj6pw" podStartSLOduration=137.38086313 podStartE2EDuration="2m17.38086313s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.331573573 +0000 UTC m=+158.217598881" watchObservedRunningTime="2025-10-06 11:50:04.38086313 +0000 UTC m=+158.266888428" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.409551 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.411044 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.421952 4958 patch_prober.go:28] interesting pod/apiserver-76f77b778f-57fbj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.421993 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-57fbj" podUID="3f84a255-5034-424e-acf0-5ba9f4aa0531" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.436677 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swm82" podStartSLOduration=137.436660684 podStartE2EDuration="2m17.436660684s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.383365385 +0000 UTC m=+158.269390703" watchObservedRunningTime="2025-10-06 11:50:04.436660684 +0000 UTC m=+158.322685992" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.443732 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.444013 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:04.943999606 +0000 UTC m=+158.830024914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.463953 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9blls" podStartSLOduration=137.463935087 podStartE2EDuration="2m17.463935087s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.43852706 +0000 UTC m=+158.324552368" watchObservedRunningTime="2025-10-06 11:50:04.463935087 +0000 UTC m=+158.349960395" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.464586 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" podStartSLOduration=137.464580157 podStartE2EDuration="2m17.464580157s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.460755161 +0000 UTC m=+158.346780469" watchObservedRunningTime="2025-10-06 11:50:04.464580157 +0000 UTC m=+158.350605465" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.545243 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: 
\"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.545591 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.045578752 +0000 UTC m=+158.931604060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.551234 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kg57j" podStartSLOduration=7.551209722 podStartE2EDuration="7.551209722s" podCreationTimestamp="2025-10-06 11:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.510260566 +0000 UTC m=+158.396285874" watchObservedRunningTime="2025-10-06 11:50:04.551209722 +0000 UTC m=+158.437235020" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.556703 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.613941 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.614100 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.646376 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.646479 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.146463997 +0000 UTC m=+159.032489305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.646685 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.646949 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.146942971 +0000 UTC m=+159.032968279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.747614 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.747741 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.247722803 +0000 UTC m=+159.133748111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.748101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.748523 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.248505857 +0000 UTC m=+159.134531205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.840255 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kg57j" event={"ID":"1538fe67-0070-42fe-86e4-3ad017710b44","Type":"ContainerStarted","Data":"f0ed526030a082d483bc0c8fc077d57916a67a7d5b14e0ed76dc19696e89401b"} Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.840935 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kg57j" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.842661 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" event={"ID":"8c14b3fd-7202-4e51-9528-7eb2825f4a5b","Type":"ContainerStarted","Data":"3b4b8834ea1fdab74bca0712d5ce609753865c55b3e3138e747a5da36fdfa0d8"} Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.845517 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" event={"ID":"59305dc8-3a12-401b-83d9-532e520b72b0","Type":"ContainerStarted","Data":"5955dca6cd58494415274f44d25520dfbb9a2c1ffe872cbca83f92b6032055da"} Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.848621 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 
11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.850497 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-928ch container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.850561 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.851024 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.351004971 +0000 UTC m=+159.237030279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.851310 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-bs4m6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.851350 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bs4m6" podUID="0538c536-f662-48c7-98fe-a1ceff33ade3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.862339 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cxkf" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.950633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:04 crc kubenswrapper[4958]: E1006 11:50:04.953392 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.453375781 +0000 UTC m=+159.339401079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.966343 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zbr46" Oct 06 11:50:04 crc kubenswrapper[4958]: I1006 11:50:04.977688 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fhfvm" podStartSLOduration=137.977668174 podStartE2EDuration="2m17.977668174s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:04.884368918 +0000 UTC m=+158.770394226" watchObservedRunningTime="2025-10-06 11:50:04.977668174 +0000 UTC m=+158.863693482" Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.020438 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:50:05 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 06 11:50:05 crc kubenswrapper[4958]: [+]process-running ok Oct 06 11:50:05 crc 
kubenswrapper[4958]: healthz check failed Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.020523 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.053084 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.053750 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.55373167 +0000 UTC m=+159.439756978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.155069 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.155688 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.655661617 +0000 UTC m=+159.541686925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.256186 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.256476 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.756447759 +0000 UTC m=+159.642473067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.256568 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.256980 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.756963325 +0000 UTC m=+159.642988633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.337848 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.357093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.357337 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.857299454 +0000 UTC m=+159.743324762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.357422 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.357741 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.857726606 +0000 UTC m=+159.743751914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.459469 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x42zz" Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.459708 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.459919 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.95989234 +0000 UTC m=+159.845917648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.460073 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.460426 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:05.960417216 +0000 UTC m=+159.846442524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.561562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.561930 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.06191656 +0000 UTC m=+159.947941868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.687592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.688332 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.188315705 +0000 UTC m=+160.074341013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.791612 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.791794 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.291765848 +0000 UTC m=+160.177791156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.792419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.792768 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.292757028 +0000 UTC m=+160.178782336 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.898607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.898780 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.398763898 +0000 UTC m=+160.284789206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.898859 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:05 crc kubenswrapper[4958]: E1006 11:50:05.899134 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.399126609 +0000 UTC m=+160.285151917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.906355 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" event={"ID":"8c14b3fd-7202-4e51-9528-7eb2825f4a5b","Type":"ContainerStarted","Data":"648d1be41851e895a0ca6de266cced8bb8ab493787e4d085f6721feab98ee0c6"} Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.916739 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vqq9g" Oct 06 11:50:05 crc kubenswrapper[4958]: I1006 11:50:05.999828 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.000160 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.500129447 +0000 UTC m=+160.386154775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.000591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.002010 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.501976733 +0000 UTC m=+160.388002251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.013786 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:50:06 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 06 11:50:06 crc kubenswrapper[4958]: [+]process-running ok Oct 06 11:50:06 crc kubenswrapper[4958]: healthz check failed Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.014168 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.103689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.103867 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:50:06.603840808 +0000 UTC m=+160.489866116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.104238 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.104569 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.6045609 +0000 UTC m=+160.490586208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.205608 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.205870 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.705843977 +0000 UTC m=+160.591869285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.206268 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.206636 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.70662158 +0000 UTC m=+160.592646888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.307021 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.307331 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.80731749 +0000 UTC m=+160.693342798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.409083 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.409365 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:06.90935426 +0000 UTC m=+160.795379568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.454189 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vqqpd" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.470993 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vv7xq"] Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.472097 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.477138 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.509890 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.510139 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:50:07.010125142 +0000 UTC m=+160.896150450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.511086 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vv7xq"] Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.611426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-utilities\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.611487 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.611508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-catalog-content\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " 
pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.611546 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96xq\" (UniqueName: \"kubernetes.io/projected/d262bad7-9d80-42f3-b97d-149b73d879c0-kube-api-access-m96xq\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.611797 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:07.11178592 +0000 UTC m=+160.997811228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.647474 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5l5pp"] Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.648405 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.652747 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.665811 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5l5pp"] Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.677780 4958 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.712914 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.713088 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:07.213064137 +0000 UTC m=+161.099089445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.713347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-utilities\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.713443 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.713533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-catalog-content\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.713625 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96xq\" (UniqueName: \"kubernetes.io/projected/d262bad7-9d80-42f3-b97d-149b73d879c0-kube-api-access-m96xq\") pod \"certified-operators-vv7xq\" (UID: 
\"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.713676 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:07.213667356 +0000 UTC m=+161.099692664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.713768 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-utilities\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.713976 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-catalog-content\") pod \"certified-operators-vv7xq\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.747889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96xq\" (UniqueName: \"kubernetes.io/projected/d262bad7-9d80-42f3-b97d-149b73d879c0-kube-api-access-m96xq\") pod \"certified-operators-vv7xq\" (UID: 
\"d262bad7-9d80-42f3-b97d-149b73d879c0\") " pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.785451 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.817805 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.818199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdsq\" (UniqueName: \"kubernetes.io/projected/3309b80a-11e6-4b60-be3f-c161644ffc7b-kube-api-access-sqdsq\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.818282 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-catalog-content\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.818410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-utilities\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.818594 
4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:07.318580382 +0000 UTC m=+161.204605691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.847765 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8fznl"] Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.849013 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.863005 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fznl"] Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.950987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-utilities\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.951042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.951105 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-catalog-content\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.952333 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-utilities\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: E1006 11:50:06.952524 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:50:07.452500185 +0000 UTC m=+161.338525493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kt8qb" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.951137 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdsq\" (UniqueName: \"kubernetes.io/projected/3309b80a-11e6-4b60-be3f-c161644ffc7b-kube-api-access-sqdsq\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.953017 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-catalog-content\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.953056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwcv9\" (UniqueName: \"kubernetes.io/projected/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-kube-api-access-gwcv9\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.953105 
4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-utilities\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.954431 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-catalog-content\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.978782 4958 generic.go:334] "Generic (PLEG): container finished" podID="221a0896-9d41-4bf4-b05e-57c067e8b885" containerID="13b32212c738a7428967dc14b36ee17dbb64e527c6d599368c48b4ac82b29aaa" exitCode=0 Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.988253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" event={"ID":"8c14b3fd-7202-4e51-9528-7eb2825f4a5b","Type":"ContainerStarted","Data":"6ff950a9418d0f8f4a374b27e48fa16e7668ec56e0970ab5442900bc6b3ddbaa"} Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.988300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" event={"ID":"8c14b3fd-7202-4e51-9528-7eb2825f4a5b","Type":"ContainerStarted","Data":"3a227178f7606f62b5ccb42225d6703bd908fee20dfa9e8f93dd5c7f6ab51faf"} Oct 06 11:50:06 crc kubenswrapper[4958]: I1006 11:50:06.988309 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" event={"ID":"221a0896-9d41-4bf4-b05e-57c067e8b885","Type":"ContainerDied","Data":"13b32212c738a7428967dc14b36ee17dbb64e527c6d599368c48b4ac82b29aaa"} Oct 06 11:50:06 crc 
kubenswrapper[4958]: I1006 11:50:06.997680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdsq\" (UniqueName: \"kubernetes.io/projected/3309b80a-11e6-4b60-be3f-c161644ffc7b-kube-api-access-sqdsq\") pod \"community-operators-5l5pp\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.017753 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:50:07 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 06 11:50:07 crc kubenswrapper[4958]: [+]process-running ok Oct 06 11:50:07 crc kubenswrapper[4958]: healthz check failed Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.018191 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.041344 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-m9cln" podStartSLOduration=10.041305646 podStartE2EDuration="10.041305646s" podCreationTimestamp="2025-10-06 11:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:07.018389984 +0000 UTC m=+160.904415282" watchObservedRunningTime="2025-10-06 11:50:07.041305646 +0000 UTC m=+160.927330954" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.053704 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.053896 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-catalog-content\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.053975 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwcv9\" (UniqueName: \"kubernetes.io/projected/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-kube-api-access-gwcv9\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.054022 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-utilities\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: E1006 11:50:07.058835 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:50:07.558818294 +0000 UTC m=+161.444843602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.059865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-catalog-content\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.062016 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-utilities\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.067264 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cmm7m"] Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.068702 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.069556 4958 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-06T11:50:06.677809393Z","Handler":null,"Name":""} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.073239 4958 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.073273 4958 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.088768 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cmm7m"] Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.088855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwcv9\" (UniqueName: \"kubernetes.io/projected/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-kube-api-access-gwcv9\") pod \"certified-operators-8fznl\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.155035 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.165734 4958 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-vv7xq"] Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.166288 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.176636 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.176698 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.249163 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kt8qb\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.257227 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.257400 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-catalog-content\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.257448 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qzt\" (UniqueName: \"kubernetes.io/projected/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-kube-api-access-74qzt\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.257525 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-utilities\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.264748 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.265011 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.358475 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-utilities\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.358885 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-catalog-content\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.358912 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74qzt\" (UniqueName: \"kubernetes.io/projected/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-kube-api-access-74qzt\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.359810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-catalog-content\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.360083 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-utilities\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " 
pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.381087 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74qzt\" (UniqueName: \"kubernetes.io/projected/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-kube-api-access-74qzt\") pod \"community-operators-cmm7m\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.394039 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fznl"] Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.402516 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.473774 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.480462 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5l5pp"] Oct 06 11:50:07 crc kubenswrapper[4958]: W1006 11:50:07.499044 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3309b80a_11e6_4b60_be3f_c161644ffc7b.slice/crio-0d6d8334801b6b1e3044e1d3d01769000b721de86fca0397f9c6d425f8c1038a WatchSource:0}: Error finding container 0d6d8334801b6b1e3044e1d3d01769000b721de86fca0397f9c6d425f8c1038a: Status 404 returned error can't find the container with id 0d6d8334801b6b1e3044e1d3d01769000b721de86fca0397f9c6d425f8c1038a Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.618858 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cmm7m"] Oct 06 11:50:07 crc kubenswrapper[4958]: W1006 11:50:07.627683 4958 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad2fcff_c2ad_444d_a6da_d4f04b1d3b25.slice/crio-c71efdf5ae6eec3e21b5862b9cfa1c8fcc8d1bb52a735e89c9977ee80fef47b9 WatchSource:0}: Error finding container c71efdf5ae6eec3e21b5862b9cfa1c8fcc8d1bb52a735e89c9977ee80fef47b9: Status 404 returned error can't find the container with id c71efdf5ae6eec3e21b5862b9cfa1c8fcc8d1bb52a735e89c9977ee80fef47b9 Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.706065 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kt8qb"] Oct 06 11:50:07 crc kubenswrapper[4958]: W1006 11:50:07.782968 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7580eb_dece_41a6_8335_33c29bc41056.slice/crio-49052fa5cba9ac9adfefe36493d50adf3632348c937c12f73d5b999be0c2c99e WatchSource:0}: Error finding container 49052fa5cba9ac9adfefe36493d50adf3632348c937c12f73d5b999be0c2c99e: Status 404 returned error can't find the container with id 49052fa5cba9ac9adfefe36493d50adf3632348c937c12f73d5b999be0c2c99e Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.985896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" event={"ID":"6a7580eb-dece-41a6-8335-33c29bc41056","Type":"ContainerStarted","Data":"14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.986316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" event={"ID":"6a7580eb-dece-41a6-8335-33c29bc41056","Type":"ContainerStarted","Data":"49052fa5cba9ac9adfefe36493d50adf3632348c937c12f73d5b999be0c2c99e"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.986351 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.987818 4958 generic.go:334] "Generic (PLEG): container finished" podID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerID="991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66" exitCode=0 Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.987903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerDied","Data":"991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.987934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerStarted","Data":"c71efdf5ae6eec3e21b5862b9cfa1c8fcc8d1bb52a735e89c9977ee80fef47b9"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.989451 4958 generic.go:334] "Generic (PLEG): container finished" podID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerID="6fd9d2d03d558775e14ee9b19bf7270bf4797410f18890909814a1eaae008ae6" exitCode=0 Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.989500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l5pp" event={"ID":"3309b80a-11e6-4b60-be3f-c161644ffc7b","Type":"ContainerDied","Data":"6fd9d2d03d558775e14ee9b19bf7270bf4797410f18890909814a1eaae008ae6"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.989522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l5pp" event={"ID":"3309b80a-11e6-4b60-be3f-c161644ffc7b","Type":"ContainerStarted","Data":"0d6d8334801b6b1e3044e1d3d01769000b721de86fca0397f9c6d425f8c1038a"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.990338 4958 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.992372 4958 generic.go:334] "Generic (PLEG): container finished" podID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerID="99dade0676d62c0e98fadc1846597ab485bd26886e52d993251914cb135278fb" exitCode=0 Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.992485 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fznl" event={"ID":"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2","Type":"ContainerDied","Data":"99dade0676d62c0e98fadc1846597ab485bd26886e52d993251914cb135278fb"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.992527 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fznl" event={"ID":"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2","Type":"ContainerStarted","Data":"c160badad2a8ff2b5d940095be5ec5359bea1ea1a6897535ef26ceafef6cffa7"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.996940 4958 generic.go:334] "Generic (PLEG): container finished" podID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerID="88a52d34ae07c03d1ebaaef7afc01869f3a7a8c8c894b9af19ca025f006e5df8" exitCode=0 Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.997063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv7xq" event={"ID":"d262bad7-9d80-42f3-b97d-149b73d879c0","Type":"ContainerDied","Data":"88a52d34ae07c03d1ebaaef7afc01869f3a7a8c8c894b9af19ca025f006e5df8"} Oct 06 11:50:07 crc kubenswrapper[4958]: I1006 11:50:07.999066 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv7xq" event={"ID":"d262bad7-9d80-42f3-b97d-149b73d879c0","Type":"ContainerStarted","Data":"b6126f78e4c5372671da65984a93fdee9a8b906feb2efa3f11beef0c3c670062"} Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.017651 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:50:08 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 06 11:50:08 crc kubenswrapper[4958]: [+]process-running ok Oct 06 11:50:08 crc kubenswrapper[4958]: healthz check failed Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.017727 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.027226 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" podStartSLOduration=141.027206574 podStartE2EDuration="2m21.027206574s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:08.02311127 +0000 UTC m=+161.909136578" watchObservedRunningTime="2025-10-06 11:50:08.027206574 +0000 UTC m=+161.913231882" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.209806 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.210615 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.220938 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.222117 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.223581 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.324110 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.380526 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675bad16-24bf-4685-b5e5-51311ca2f42e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.380619 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675bad16-24bf-4685-b5e5-51311ca2f42e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.481720 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/221a0896-9d41-4bf4-b05e-57c067e8b885-config-volume\") pod \"221a0896-9d41-4bf4-b05e-57c067e8b885\" (UID: 
\"221a0896-9d41-4bf4-b05e-57c067e8b885\") " Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.482597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97hxg\" (UniqueName: \"kubernetes.io/projected/221a0896-9d41-4bf4-b05e-57c067e8b885-kube-api-access-97hxg\") pod \"221a0896-9d41-4bf4-b05e-57c067e8b885\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.482548 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221a0896-9d41-4bf4-b05e-57c067e8b885-config-volume" (OuterVolumeSpecName: "config-volume") pod "221a0896-9d41-4bf4-b05e-57c067e8b885" (UID: "221a0896-9d41-4bf4-b05e-57c067e8b885"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.482674 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/221a0896-9d41-4bf4-b05e-57c067e8b885-secret-volume\") pod \"221a0896-9d41-4bf4-b05e-57c067e8b885\" (UID: \"221a0896-9d41-4bf4-b05e-57c067e8b885\") " Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.483438 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675bad16-24bf-4685-b5e5-51311ca2f42e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.483515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675bad16-24bf-4685-b5e5-51311ca2f42e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc 
kubenswrapper[4958]: I1006 11:50:08.483598 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/221a0896-9d41-4bf4-b05e-57c067e8b885-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.483909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675bad16-24bf-4685-b5e5-51311ca2f42e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.489080 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221a0896-9d41-4bf4-b05e-57c067e8b885-kube-api-access-97hxg" (OuterVolumeSpecName: "kube-api-access-97hxg") pod "221a0896-9d41-4bf4-b05e-57c067e8b885" (UID: "221a0896-9d41-4bf4-b05e-57c067e8b885"). InnerVolumeSpecName "kube-api-access-97hxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.489456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221a0896-9d41-4bf4-b05e-57c067e8b885-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "221a0896-9d41-4bf4-b05e-57c067e8b885" (UID: "221a0896-9d41-4bf4-b05e-57c067e8b885"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.498228 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675bad16-24bf-4685-b5e5-51311ca2f42e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.551932 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.584246 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97hxg\" (UniqueName: \"kubernetes.io/projected/221a0896-9d41-4bf4-b05e-57c067e8b885-kube-api-access-97hxg\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.584276 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/221a0896-9d41-4bf4-b05e-57c067e8b885-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.656891 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lntkc"] Oct 06 11:50:08 crc kubenswrapper[4958]: E1006 11:50:08.661966 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221a0896-9d41-4bf4-b05e-57c067e8b885" containerName="collect-profiles" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.662010 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="221a0896-9d41-4bf4-b05e-57c067e8b885" containerName="collect-profiles" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.662289 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="221a0896-9d41-4bf4-b05e-57c067e8b885" containerName="collect-profiles" Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 
11:50:08.663566 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.667157 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.674654 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lntkc"]
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.712671 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.713856 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.718546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.718794 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.731813 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.787618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-utilities\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.787678 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-catalog-content\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.787773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7vx\" (UniqueName: \"kubernetes.io/projected/f668bb45-68a3-4e4a-850e-45f82572b753-kube-api-access-jb7vx\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.801123 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.889657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.889933 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-utilities\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.889974 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-catalog-content\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.890062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.890207 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7vx\" (UniqueName: \"kubernetes.io/projected/f668bb45-68a3-4e4a-850e-45f82572b753-kube-api-access-jb7vx\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.890475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-utilities\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.891201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-catalog-content\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.915680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7vx\" (UniqueName: \"kubernetes.io/projected/f668bb45-68a3-4e4a-850e-45f82572b753-kube-api-access-jb7vx\") pod \"redhat-marketplace-lntkc\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.935328 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.990690 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lntkc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.991240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.991321 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:08 crc kubenswrapper[4958]: I1006 11:50:08.991493 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.010706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675bad16-24bf-4685-b5e5-51311ca2f42e","Type":"ContainerStarted","Data":"e62b2e0284822cf0ab68e335c2c55f95924bb67fbe112430161baaae322c1ada"}
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.013790 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.015839 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 11:50:09 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld
Oct 06 11:50:09 crc kubenswrapper[4958]: [+]process-running ok
Oct 06 11:50:09 crc kubenswrapper[4958]: healthz check failed
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.015952 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.021279 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5" event={"ID":"221a0896-9d41-4bf4-b05e-57c067e8b885","Type":"ContainerDied","Data":"d787d72a040a874b3756208cf03b7c3da0342a66991a052efaa3796cf1fef787"}
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.021355 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.021352 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d787d72a040a874b3756208cf03b7c3da0342a66991a052efaa3796cf1fef787"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.030299 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.043507 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fqzt6"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.045647 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.058660 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqzt6"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.194507 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2xm\" (UniqueName: \"kubernetes.io/projected/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-kube-api-access-kv2xm\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.194864 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-utilities\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.194891 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-catalog-content\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.301216 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2xm\" (UniqueName: \"kubernetes.io/projected/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-kube-api-access-kv2xm\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.301451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-utilities\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.301505 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-catalog-content\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.302623 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-utilities\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.302706 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-catalog-content\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.306126 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lntkc"]
Oct 06 11:50:09 crc kubenswrapper[4958]: W1006 11:50:09.317023 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf668bb45_68a3_4e4a_850e_45f82572b753.slice/crio-e57cda0140ef5f3707bbc291187724a53df848edc9c381f0dfe9a1f812c5a235 WatchSource:0}: Error finding container e57cda0140ef5f3707bbc291187724a53df848edc9c381f0dfe9a1f812c5a235: Status 404 returned error can't find the container with id e57cda0140ef5f3707bbc291187724a53df848edc9c381f0dfe9a1f812c5a235
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.326214 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2xm\" (UniqueName: \"kubernetes.io/projected/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-kube-api-access-kv2xm\") pod \"redhat-marketplace-fqzt6\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.389681 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqzt6"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.414802 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.419852 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-57fbj"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.637296 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.646794 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r96rs"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.649071 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.655573 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.659154 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r96rs"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.688407 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.688852 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ccpgf"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.711217 4958 patch_prober.go:28] interesting pod/console-f9d7485db-ccpgf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.711277 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ccpgf" podUID="799bd962-f454-498a-88e6-58793b08d732" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.765390 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqzt6"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.830900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bk9\" (UniqueName: \"kubernetes.io/projected/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-kube-api-access-96bk9\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.830981 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-utilities\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.831003 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-catalog-content\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.848030 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b5fmm"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.849019 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.885462 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5fmm"]
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.915254 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-bs4m6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.915324 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bs4m6" podUID="0538c536-f662-48c7-98fe-a1ceff33ade3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.915379 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-bs4m6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.915433 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bs4m6" podUID="0538c536-f662-48c7-98fe-a1ceff33ade3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.932264 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96bk9\" (UniqueName: \"kubernetes.io/projected/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-kube-api-access-96bk9\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.932697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-utilities\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.932721 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-catalog-content\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.933329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-utilities\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.933638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-catalog-content\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.959837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96bk9\" (UniqueName: \"kubernetes.io/projected/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-kube-api-access-96bk9\") pod \"redhat-operators-r96rs\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:09 crc kubenswrapper[4958]: I1006 11:50:09.983110 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r96rs"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.014093 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mfm8s"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.022340 4958 patch_prober.go:28] interesting pod/router-default-5444994796-mfm8s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 11:50:10 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld
Oct 06 11:50:10 crc kubenswrapper[4958]: [+]process-running ok
Oct 06 11:50:10 crc kubenswrapper[4958]: healthz check failed
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.022437 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfm8s" podUID="37c87c2e-92b8-4b83-be26-cd63ec636eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.036983 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.037028 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-catalog-content\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.037047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qg4k\" (UniqueName: \"kubernetes.io/projected/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-kube-api-access-6qg4k\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.037072 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-utilities\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.058674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c-metrics-certs\") pod \"network-metrics-daemon-4mxw5\" (UID: \"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c\") " pod="openshift-multus/network-metrics-daemon-4mxw5"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.069296 4958 generic.go:334] "Generic (PLEG): container finished" podID="675bad16-24bf-4685-b5e5-51311ca2f42e" containerID="a867ea0a3e83bea6fe75b8bf116c435f786e7005146d659e6e1b54c72a10cb61" exitCode=0
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.070213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675bad16-24bf-4685-b5e5-51311ca2f42e","Type":"ContainerDied","Data":"a867ea0a3e83bea6fe75b8bf116c435f786e7005146d659e6e1b54c72a10cb61"}
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.109662 4958 generic.go:334] "Generic (PLEG): container finished" podID="f668bb45-68a3-4e4a-850e-45f82572b753" containerID="f0ce3a3f9655d45d417096f6e116cf6ac7facf155f772a9eedf3956769774482" exitCode=0
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.110510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lntkc" event={"ID":"f668bb45-68a3-4e4a-850e-45f82572b753","Type":"ContainerDied","Data":"f0ce3a3f9655d45d417096f6e116cf6ac7facf155f772a9eedf3956769774482"}
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.110585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lntkc" event={"ID":"f668bb45-68a3-4e4a-850e-45f82572b753","Type":"ContainerStarted","Data":"e57cda0140ef5f3707bbc291187724a53df848edc9c381f0dfe9a1f812c5a235"}
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.159799 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-utilities\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.159938 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-catalog-content\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.159970 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qg4k\" (UniqueName: \"kubernetes.io/projected/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-kube-api-access-6qg4k\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.162336 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-catalog-content\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.162435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-utilities\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.187582 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqzt6" event={"ID":"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa","Type":"ContainerStarted","Data":"d1074736ca0d15fcbadec158b025e3f718c1302a9bba1f01643120b4d779c4e0"}
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.192085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qg4k\" (UniqueName: \"kubernetes.io/projected/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-kube-api-access-6qg4k\") pod \"redhat-operators-b5fmm\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.219781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22a4e500-57ab-4e3c-a3d5-c0256618c3cd","Type":"ContainerStarted","Data":"499ff59dc33b7027d1e56eed42161c325b9dbcd645464d51ade55eca10c49b3b"}
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.257627 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5fmm"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.277624 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4mxw5"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.458448 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r96rs"]
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.474393 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-928ch"
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.942318 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4mxw5"]
Oct 06 11:50:10 crc kubenswrapper[4958]: I1006 11:50:10.955701 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5fmm"]
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.023184 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mfm8s"
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.027173 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mfm8s"
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.228904 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5fmm" event={"ID":"2068c9e3-4b44-4225-b1a5-e495dd6ab26e","Type":"ContainerStarted","Data":"db33b7cc7929af175692d7644fe5fd45c702828d7aff0b7be1cc9f4c74479575"}
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.235704 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerID="add7323d8e2b6c5592a0676c41634b09eca4748b5b72ff28b7d7837a5bab48fb" exitCode=0
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.236089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r96rs" event={"ID":"4c365fe8-47fa-4b1e-9d41-51d5b21a377e","Type":"ContainerDied","Data":"add7323d8e2b6c5592a0676c41634b09eca4748b5b72ff28b7d7837a5bab48fb"}
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.236186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r96rs" event={"ID":"4c365fe8-47fa-4b1e-9d41-51d5b21a377e","Type":"ContainerStarted","Data":"f7566f7d79f3f3f15730d5a4d95be14b3191a5488a5d61300a67597a0a4b356e"}
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.239167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" event={"ID":"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c","Type":"ContainerStarted","Data":"5aeaa67beccc2e5a314bf550b7cc9123e3737fe79a4eb9cdc010d71c23139b1b"}
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.245127 4958 generic.go:334] "Generic (PLEG): container finished" podID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerID="66bf95c0fa86c72c27d6565518738ee847d192fa2d51508c60ccd43a0e549d1f" exitCode=0
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.245218 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqzt6" event={"ID":"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa","Type":"ContainerDied","Data":"66bf95c0fa86c72c27d6565518738ee847d192fa2d51508c60ccd43a0e549d1f"}
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.248768 4958 generic.go:334] "Generic (PLEG): container finished" podID="22a4e500-57ab-4e3c-a3d5-c0256618c3cd" containerID="2e5f58697701bdaa84624693ef19d5cbfe905b08dedeb7686a1b9b4fbd8f30b4" exitCode=0
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.248878 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22a4e500-57ab-4e3c-a3d5-c0256618c3cd","Type":"ContainerDied","Data":"2e5f58697701bdaa84624693ef19d5cbfe905b08dedeb7686a1b9b4fbd8f30b4"}
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.552723 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.686675 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675bad16-24bf-4685-b5e5-51311ca2f42e-kubelet-dir\") pod \"675bad16-24bf-4685-b5e5-51311ca2f42e\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") "
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.686731 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675bad16-24bf-4685-b5e5-51311ca2f42e-kube-api-access\") pod \"675bad16-24bf-4685-b5e5-51311ca2f42e\" (UID: \"675bad16-24bf-4685-b5e5-51311ca2f42e\") "
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.686904 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/675bad16-24bf-4685-b5e5-51311ca2f42e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "675bad16-24bf-4685-b5e5-51311ca2f42e" (UID: "675bad16-24bf-4685-b5e5-51311ca2f42e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.687296 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675bad16-24bf-4685-b5e5-51311ca2f42e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.692281 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675bad16-24bf-4685-b5e5-51311ca2f42e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "675bad16-24bf-4685-b5e5-51311ca2f42e" (UID: "675bad16-24bf-4685-b5e5-51311ca2f42e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:50:11 crc kubenswrapper[4958]: I1006 11:50:11.788020 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675bad16-24bf-4685-b5e5-51311ca2f42e-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.194446 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kg57j"
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.258500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675bad16-24bf-4685-b5e5-51311ca2f42e","Type":"ContainerDied","Data":"e62b2e0284822cf0ab68e335c2c55f95924bb67fbe112430161baaae322c1ada"}
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.258565 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62b2e0284822cf0ab68e335c2c55f95924bb67fbe112430161baaae322c1ada"
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.258671 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.272478 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" event={"ID":"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c","Type":"ContainerStarted","Data":"81b7eaa31f950c69527c43ca0bd0a797fcc86bda5325ae535d1d202f56cf17fb"}
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.281021 4958 generic.go:334] "Generic (PLEG): container finished" podID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerID="9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8" exitCode=0
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.281289 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5fmm" event={"ID":"2068c9e3-4b44-4225-b1a5-e495dd6ab26e","Type":"ContainerDied","Data":"9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8"}
Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.568384 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.705712 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kube-api-access\") pod \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.705892 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kubelet-dir\") pod \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\" (UID: \"22a4e500-57ab-4e3c-a3d5-c0256618c3cd\") " Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.706357 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22a4e500-57ab-4e3c-a3d5-c0256618c3cd" (UID: "22a4e500-57ab-4e3c-a3d5-c0256618c3cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.712315 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22a4e500-57ab-4e3c-a3d5-c0256618c3cd" (UID: "22a4e500-57ab-4e3c-a3d5-c0256618c3cd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.807055 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:12 crc kubenswrapper[4958]: I1006 11:50:12.807085 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a4e500-57ab-4e3c-a3d5-c0256618c3cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:13 crc kubenswrapper[4958]: I1006 11:50:13.300091 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22a4e500-57ab-4e3c-a3d5-c0256618c3cd","Type":"ContainerDied","Data":"499ff59dc33b7027d1e56eed42161c325b9dbcd645464d51ade55eca10c49b3b"} Oct 06 11:50:13 crc kubenswrapper[4958]: I1006 11:50:13.300231 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499ff59dc33b7027d1e56eed42161c325b9dbcd645464d51ade55eca10c49b3b" Oct 06 11:50:13 crc kubenswrapper[4958]: I1006 11:50:13.300109 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:50:13 crc kubenswrapper[4958]: I1006 11:50:13.304001 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4mxw5" event={"ID":"cbfbf2ea-df5f-4844-ba69-8c9f16a3a26c","Type":"ContainerStarted","Data":"341c2767c74f8aaea7ac03842e1c8dfd506c7b3f30dbac60322189d8ff9c26a2"} Oct 06 11:50:13 crc kubenswrapper[4958]: I1006 11:50:13.319394 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4mxw5" podStartSLOduration=146.319374537 podStartE2EDuration="2m26.319374537s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:50:13.318756729 +0000 UTC m=+167.204782037" watchObservedRunningTime="2025-10-06 11:50:13.319374537 +0000 UTC m=+167.205399845" Oct 06 11:50:19 crc kubenswrapper[4958]: I1006 11:50:19.692333 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:50:19 crc kubenswrapper[4958]: I1006 11:50:19.697052 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:50:19 crc kubenswrapper[4958]: I1006 11:50:19.925645 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bs4m6" Oct 06 11:50:23 crc kubenswrapper[4958]: I1006 11:50:23.801668 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:50:23 crc kubenswrapper[4958]: I1006 11:50:23.802021 4958 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:50:27 crc kubenswrapper[4958]: I1006 11:50:27.485895 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:50:32 crc kubenswrapper[4958]: E1006 11:50:32.402091 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 11:50:32 crc kubenswrapper[4958]: E1006 11:50:32.403040 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m96xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vv7xq_openshift-marketplace(d262bad7-9d80-42f3-b97d-149b73d879c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:32 crc kubenswrapper[4958]: E1006 11:50:32.404555 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vv7xq" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" Oct 06 11:50:33 crc 
kubenswrapper[4958]: E1006 11:50:33.836869 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vv7xq" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" Oct 06 11:50:34 crc kubenswrapper[4958]: I1006 11:50:34.952405 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.846867 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.846919 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.847590 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96bk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-r96rs_openshift-marketplace(4c365fe8-47fa-4b1e-9d41-51d5b21a377e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.847717 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qg4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-b5fmm_openshift-marketplace(2068c9e3-4b44-4225-b1a5-e495dd6ab26e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.848901 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r96rs" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" Oct 06 11:50:36 crc 
kubenswrapper[4958]: E1006 11:50:36.848989 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-b5fmm" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.882906 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.883024 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwcv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8fznl_openshift-marketplace(65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:36 crc kubenswrapper[4958]: E1006 11:50:36.884371 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8fznl" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" Oct 06 11:50:39 crc 
kubenswrapper[4958]: E1006 11:50:39.501373 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8fznl" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" Oct 06 11:50:39 crc kubenswrapper[4958]: E1006 11:50:39.501598 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r96rs" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" Oct 06 11:50:39 crc kubenswrapper[4958]: E1006 11:50:39.502046 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-b5fmm" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" Oct 06 11:50:39 crc kubenswrapper[4958]: E1006 11:50:39.582601 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 06 11:50:39 crc kubenswrapper[4958]: E1006 11:50:39.582869 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqdsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5l5pp_openshift-marketplace(3309b80a-11e6-4b60-be3f-c161644ffc7b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:39 crc kubenswrapper[4958]: E1006 11:50:39.584071 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5l5pp" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" Oct 06 11:50:40 crc 
kubenswrapper[4958]: E1006 11:50:40.336060 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.336753 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jb7vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-lntkc_openshift-marketplace(f668bb45-68a3-4e4a-850e-45f82572b753): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.337907 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lntkc" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.355946 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.356167 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv2xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fqzt6_openshift-marketplace(b08c0f74-407f-47c0-8cd4-85b2ceb71bfa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.357596 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fqzt6" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" Oct 06 11:50:40 crc 
kubenswrapper[4958]: I1006 11:50:40.474572 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerStarted","Data":"e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340"} Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.475292 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lntkc" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.477003 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fqzt6" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" Oct 06 11:50:40 crc kubenswrapper[4958]: E1006 11:50:40.479596 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5l5pp" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" Oct 06 11:50:40 crc kubenswrapper[4958]: I1006 11:50:40.504158 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k94lv" Oct 06 11:50:41 crc kubenswrapper[4958]: I1006 11:50:41.481845 4958 generic.go:334] "Generic (PLEG): container finished" podID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerID="e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340" exitCode=0 Oct 06 11:50:41 crc kubenswrapper[4958]: 
I1006 11:50:41.481959 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerDied","Data":"e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340"} Oct 06 11:50:42 crc kubenswrapper[4958]: I1006 11:50:42.491272 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerStarted","Data":"cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06"} Oct 06 11:50:42 crc kubenswrapper[4958]: I1006 11:50:42.525226 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cmm7m" podStartSLOduration=1.426105513 podStartE2EDuration="35.525191752s" podCreationTimestamp="2025-10-06 11:50:07 +0000 UTC" firstStartedPulling="2025-10-06 11:50:07.990047662 +0000 UTC m=+161.876072970" lastFinishedPulling="2025-10-06 11:50:42.089133901 +0000 UTC m=+195.975159209" observedRunningTime="2025-10-06 11:50:42.524417099 +0000 UTC m=+196.410442447" watchObservedRunningTime="2025-10-06 11:50:42.525191752 +0000 UTC m=+196.411217110" Oct 06 11:50:47 crc kubenswrapper[4958]: I1006 11:50:47.404165 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:47 crc kubenswrapper[4958]: I1006 11:50:47.404539 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:47 crc kubenswrapper[4958]: I1006 11:50:47.583567 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:47 crc kubenswrapper[4958]: I1006 11:50:47.659265 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cmm7m" 
Oct 06 11:50:47 crc kubenswrapper[4958]: I1006 11:50:47.830911 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cmm7m"] Oct 06 11:50:49 crc kubenswrapper[4958]: I1006 11:50:49.540041 4958 generic.go:334] "Generic (PLEG): container finished" podID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerID="728d21ce65acb6e431ddbc467abd230b78f71dbc5ab04cfcd66cdd0d8c251bec" exitCode=0 Oct 06 11:50:49 crc kubenswrapper[4958]: I1006 11:50:49.540186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv7xq" event={"ID":"d262bad7-9d80-42f3-b97d-149b73d879c0","Type":"ContainerDied","Data":"728d21ce65acb6e431ddbc467abd230b78f71dbc5ab04cfcd66cdd0d8c251bec"} Oct 06 11:50:49 crc kubenswrapper[4958]: I1006 11:50:49.540803 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cmm7m" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="registry-server" containerID="cri-o://cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06" gracePeriod=2 Oct 06 11:50:49 crc kubenswrapper[4958]: I1006 11:50:49.920943 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.037732 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74qzt\" (UniqueName: \"kubernetes.io/projected/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-kube-api-access-74qzt\") pod \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.038253 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-catalog-content\") pod \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.038427 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-utilities\") pod \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\" (UID: \"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25\") " Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.040281 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-utilities" (OuterVolumeSpecName: "utilities") pod "aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" (UID: "aad2fcff-c2ad-444d-a6da-d4f04b1d3b25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.045912 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-kube-api-access-74qzt" (OuterVolumeSpecName: "kube-api-access-74qzt") pod "aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" (UID: "aad2fcff-c2ad-444d-a6da-d4f04b1d3b25"). InnerVolumeSpecName "kube-api-access-74qzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.109636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" (UID: "aad2fcff-c2ad-444d-a6da-d4f04b1d3b25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.139711 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.139747 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.139757 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74qzt\" (UniqueName: \"kubernetes.io/projected/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25-kube-api-access-74qzt\") on node \"crc\" DevicePath \"\"" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.553937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv7xq" event={"ID":"d262bad7-9d80-42f3-b97d-149b73d879c0","Type":"ContainerStarted","Data":"2c30bfef579af1fb0eb74c65ad7dc393a11d4e89178e8abd4fffb93f341afc07"} Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.563436 4958 generic.go:334] "Generic (PLEG): container finished" podID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerID="cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06" exitCode=0 Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.563857 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerDied","Data":"cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06"} Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.563901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cmm7m" event={"ID":"aad2fcff-c2ad-444d-a6da-d4f04b1d3b25","Type":"ContainerDied","Data":"c71efdf5ae6eec3e21b5862b9cfa1c8fcc8d1bb52a735e89c9977ee80fef47b9"} Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.563929 4958 scope.go:117] "RemoveContainer" containerID="cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.564120 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cmm7m" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.598908 4958 scope.go:117] "RemoveContainer" containerID="e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.602220 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vv7xq" podStartSLOduration=2.655706044 podStartE2EDuration="44.602206655s" podCreationTimestamp="2025-10-06 11:50:06 +0000 UTC" firstStartedPulling="2025-10-06 11:50:07.998411915 +0000 UTC m=+161.884437263" lastFinishedPulling="2025-10-06 11:50:49.944912546 +0000 UTC m=+203.830937874" observedRunningTime="2025-10-06 11:50:50.580874444 +0000 UTC m=+204.466899752" watchObservedRunningTime="2025-10-06 11:50:50.602206655 +0000 UTC m=+204.488231973" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.608584 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cmm7m"] Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.612508 4958 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-cmm7m"] Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.647021 4958 scope.go:117] "RemoveContainer" containerID="991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.661368 4958 scope.go:117] "RemoveContainer" containerID="cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06" Oct 06 11:50:50 crc kubenswrapper[4958]: E1006 11:50:50.662008 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06\": container with ID starting with cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06 not found: ID does not exist" containerID="cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.662172 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06"} err="failed to get container status \"cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06\": rpc error: code = NotFound desc = could not find container \"cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06\": container with ID starting with cbf8e80f8f089b600e778fe244ecce65939c029a6667a11c0dbb855ab3de7c06 not found: ID does not exist" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.662328 4958 scope.go:117] "RemoveContainer" containerID="e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340" Oct 06 11:50:50 crc kubenswrapper[4958]: E1006 11:50:50.662738 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340\": container with ID starting with 
e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340 not found: ID does not exist" containerID="e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.662775 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340"} err="failed to get container status \"e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340\": rpc error: code = NotFound desc = could not find container \"e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340\": container with ID starting with e73ad21c6b0d03c76891f98969ae9a37e7428d07433bc503cdaff7047be5b340 not found: ID does not exist" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.662802 4958 scope.go:117] "RemoveContainer" containerID="991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66" Oct 06 11:50:50 crc kubenswrapper[4958]: E1006 11:50:50.663015 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66\": container with ID starting with 991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66 not found: ID does not exist" containerID="991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.663042 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66"} err="failed to get container status \"991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66\": rpc error: code = NotFound desc = could not find container \"991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66\": container with ID starting with 991a0b02330793288d50688b9308b20535ef9550655fba4703f6205b07a12b66 not found: ID does not 
exist" Oct 06 11:50:50 crc kubenswrapper[4958]: I1006 11:50:50.932675 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" path="/var/lib/kubelet/pods/aad2fcff-c2ad-444d-a6da-d4f04b1d3b25/volumes" Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.584897 4958 generic.go:334] "Generic (PLEG): container finished" podID="f668bb45-68a3-4e4a-850e-45f82572b753" containerID="5ee2206f250bd32daf3447fa982c4b0bf32e457e0a81a3f75642567598ac664e" exitCode=0 Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.584963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lntkc" event={"ID":"f668bb45-68a3-4e4a-850e-45f82572b753","Type":"ContainerDied","Data":"5ee2206f250bd32daf3447fa982c4b0bf32e457e0a81a3f75642567598ac664e"} Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.588692 4958 generic.go:334] "Generic (PLEG): container finished" podID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerID="96108182a9d978330119a2e7b585bc78d535569aa3ff4420cdcce385bbd704a0" exitCode=0 Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.588743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqzt6" event={"ID":"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa","Type":"ContainerDied","Data":"96108182a9d978330119a2e7b585bc78d535569aa3ff4420cdcce385bbd704a0"} Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.590021 4958 generic.go:334] "Generic (PLEG): container finished" podID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerID="f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a" exitCode=0 Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.590073 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5fmm" event={"ID":"2068c9e3-4b44-4225-b1a5-e495dd6ab26e","Type":"ContainerDied","Data":"f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a"} Oct 06 11:50:53 
crc kubenswrapper[4958]: I1006 11:50:53.801846 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.801939 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.801995 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.802632 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 11:50:53 crc kubenswrapper[4958]: I1006 11:50:53.802734 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4" gracePeriod=600 Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.600926 4958 generic.go:334] "Generic (PLEG): container finished" podID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" 
containerID="0466e74830822003a84038f337c16db9cc25ae376ef0c0f9983e3aab8b0bcf27" exitCode=0 Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.600985 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fznl" event={"ID":"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2","Type":"ContainerDied","Data":"0466e74830822003a84038f337c16db9cc25ae376ef0c0f9983e3aab8b0bcf27"} Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.606601 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lntkc" event={"ID":"f668bb45-68a3-4e4a-850e-45f82572b753","Type":"ContainerStarted","Data":"b23aa956000a239c5219aa8a7f3866487de5fb55cdfcc4aad0fccbeac213115d"} Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.610174 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4" exitCode=0 Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.610221 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4"} Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.610238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"995bc778c4c0807c4500542d2b3c01981314abfb47f00f805741a4ecb4ef1873"} Oct 06 11:50:54 crc kubenswrapper[4958]: I1006 11:50:54.612330 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5fmm" event={"ID":"2068c9e3-4b44-4225-b1a5-e495dd6ab26e","Type":"ContainerStarted","Data":"d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163"} Oct 06 11:50:54 
crc kubenswrapper[4958]: I1006 11:50:54.652283 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b5fmm" podStartSLOduration=3.8909963899999997 podStartE2EDuration="45.652263168s" podCreationTimestamp="2025-10-06 11:50:09 +0000 UTC" firstStartedPulling="2025-10-06 11:50:12.283977554 +0000 UTC m=+166.170002862" lastFinishedPulling="2025-10-06 11:50:54.045244332 +0000 UTC m=+207.931269640" observedRunningTime="2025-10-06 11:50:54.651462023 +0000 UTC m=+208.537487341" watchObservedRunningTime="2025-10-06 11:50:54.652263168 +0000 UTC m=+208.538288476" Oct 06 11:50:55 crc kubenswrapper[4958]: I1006 11:50:55.928917 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lntkc" podStartSLOduration=4.08364102 podStartE2EDuration="47.928901811s" podCreationTimestamp="2025-10-06 11:50:08 +0000 UTC" firstStartedPulling="2025-10-06 11:50:10.158674532 +0000 UTC m=+164.044699840" lastFinishedPulling="2025-10-06 11:50:54.003935323 +0000 UTC m=+207.889960631" observedRunningTime="2025-10-06 11:50:54.672480724 +0000 UTC m=+208.558506052" watchObservedRunningTime="2025-10-06 11:50:55.928901811 +0000 UTC m=+209.814927119" Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.645694 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqzt6" event={"ID":"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa","Type":"ContainerStarted","Data":"52a01670941c4a28e863f1483d599203a4fe2f101faa93a82b7359e21e96fe5e"} Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.650700 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fznl" event={"ID":"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2","Type":"ContainerStarted","Data":"fdec7ac15f50c771dc6e2c0d2fea89254cc36d9772a7876b813c94d3206fabfe"} Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.652798 4958 generic.go:334] "Generic (PLEG): 
container finished" podID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerID="2313e8151b54ae6d13864cc02bdb6d9731f5a1521f90eb7a34a24a32da64bdf1" exitCode=0 Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.652853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r96rs" event={"ID":"4c365fe8-47fa-4b1e-9d41-51d5b21a377e","Type":"ContainerDied","Data":"2313e8151b54ae6d13864cc02bdb6d9731f5a1521f90eb7a34a24a32da64bdf1"} Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.699525 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fqzt6" podStartSLOduration=3.371385806 podStartE2EDuration="47.699501385s" podCreationTimestamp="2025-10-06 11:50:09 +0000 UTC" firstStartedPulling="2025-10-06 11:50:11.248172019 +0000 UTC m=+165.134197327" lastFinishedPulling="2025-10-06 11:50:55.576287598 +0000 UTC m=+209.462312906" observedRunningTime="2025-10-06 11:50:56.668292143 +0000 UTC m=+210.554317481" watchObservedRunningTime="2025-10-06 11:50:56.699501385 +0000 UTC m=+210.585526713" Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.699687 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8fznl" podStartSLOduration=3.10933715 podStartE2EDuration="50.6996808s" podCreationTimestamp="2025-10-06 11:50:06 +0000 UTC" firstStartedPulling="2025-10-06 11:50:07.99427027 +0000 UTC m=+161.880295578" lastFinishedPulling="2025-10-06 11:50:55.58461392 +0000 UTC m=+209.470639228" observedRunningTime="2025-10-06 11:50:56.690269014 +0000 UTC m=+210.576294322" watchObservedRunningTime="2025-10-06 11:50:56.6996808 +0000 UTC m=+210.585706128" Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.786353 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.786411 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:56 crc kubenswrapper[4958]: I1006 11:50:56.824980 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.167583 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.167646 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.661340 4958 generic.go:334] "Generic (PLEG): container finished" podID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerID="7d3e9b6c9707e03401d38c8995219ea6749f549e73c9b70559d908071a2aa64a" exitCode=0 Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.661404 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l5pp" event={"ID":"3309b80a-11e6-4b60-be3f-c161644ffc7b","Type":"ContainerDied","Data":"7d3e9b6c9707e03401d38c8995219ea6749f549e73c9b70559d908071a2aa64a"} Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.666883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r96rs" event={"ID":"4c365fe8-47fa-4b1e-9d41-51d5b21a377e","Type":"ContainerStarted","Data":"da951b810ea6c170ba43433f46e4832bfc6d85442141b8fac65174e8e5570cec"} Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.709516 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r96rs" podStartSLOduration=2.851307784 podStartE2EDuration="48.709496079s" podCreationTimestamp="2025-10-06 11:50:09 +0000 UTC" firstStartedPulling="2025-10-06 11:50:11.239676002 +0000 UTC m=+165.125701310" 
lastFinishedPulling="2025-10-06 11:50:57.097864297 +0000 UTC m=+210.983889605" observedRunningTime="2025-10-06 11:50:57.708204579 +0000 UTC m=+211.594229897" watchObservedRunningTime="2025-10-06 11:50:57.709496079 +0000 UTC m=+211.595521387" Oct 06 11:50:57 crc kubenswrapper[4958]: I1006 11:50:57.715201 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:50:58 crc kubenswrapper[4958]: I1006 11:50:58.209985 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fznl" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="registry-server" probeResult="failure" output=< Oct 06 11:50:58 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 11:50:58 crc kubenswrapper[4958]: > Oct 06 11:50:58 crc kubenswrapper[4958]: I1006 11:50:58.473678 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dftvm"] Oct 06 11:50:58 crc kubenswrapper[4958]: I1006 11:50:58.680465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l5pp" event={"ID":"3309b80a-11e6-4b60-be3f-c161644ffc7b","Type":"ContainerStarted","Data":"66b7ff6233a1defd1880ce30f9c0d93ccda69fcd878842e63998e8cd5405b959"} Oct 06 11:50:58 crc kubenswrapper[4958]: I1006 11:50:58.991017 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lntkc" Oct 06 11:50:58 crc kubenswrapper[4958]: I1006 11:50:58.991079 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lntkc" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.037591 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lntkc" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.052798 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5l5pp" podStartSLOduration=2.863016914 podStartE2EDuration="53.052779008s" podCreationTimestamp="2025-10-06 11:50:06 +0000 UTC" firstStartedPulling="2025-10-06 11:50:07.991263579 +0000 UTC m=+161.877288927" lastFinishedPulling="2025-10-06 11:50:58.181025713 +0000 UTC m=+212.067051021" observedRunningTime="2025-10-06 11:50:58.718517072 +0000 UTC m=+212.604542380" watchObservedRunningTime="2025-10-06 11:50:59.052779008 +0000 UTC m=+212.938804316" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.390121 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fqzt6" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.390181 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fqzt6" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.442128 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fqzt6" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.736055 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lntkc" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.983988 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r96rs" Oct 06 11:50:59 crc kubenswrapper[4958]: I1006 11:50:59.984089 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r96rs" Oct 06 11:51:00 crc kubenswrapper[4958]: I1006 11:51:00.258919 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b5fmm" Oct 06 11:51:00 crc kubenswrapper[4958]: I1006 11:51:00.258978 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b5fmm" Oct 06 11:51:00 crc kubenswrapper[4958]: I1006 11:51:00.312398 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b5fmm" Oct 06 11:51:00 crc kubenswrapper[4958]: I1006 11:51:00.726357 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b5fmm" Oct 06 11:51:01 crc kubenswrapper[4958]: I1006 11:51:01.022266 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r96rs" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="registry-server" probeResult="failure" output=< Oct 06 11:51:01 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 11:51:01 crc kubenswrapper[4958]: > Oct 06 11:51:02 crc kubenswrapper[4958]: I1006 11:51:02.226106 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5fmm"] Oct 06 11:51:02 crc kubenswrapper[4958]: I1006 11:51:02.705474 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b5fmm" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="registry-server" containerID="cri-o://d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163" gracePeriod=2 Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.082973 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b5fmm" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.214165 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qg4k\" (UniqueName: \"kubernetes.io/projected/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-kube-api-access-6qg4k\") pod \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.214242 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-catalog-content\") pod \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.214300 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-utilities\") pod \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\" (UID: \"2068c9e3-4b44-4225-b1a5-e495dd6ab26e\") " Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.215266 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-utilities" (OuterVolumeSpecName: "utilities") pod "2068c9e3-4b44-4225-b1a5-e495dd6ab26e" (UID: "2068c9e3-4b44-4225-b1a5-e495dd6ab26e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.219847 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-kube-api-access-6qg4k" (OuterVolumeSpecName: "kube-api-access-6qg4k") pod "2068c9e3-4b44-4225-b1a5-e495dd6ab26e" (UID: "2068c9e3-4b44-4225-b1a5-e495dd6ab26e"). InnerVolumeSpecName "kube-api-access-6qg4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.313285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2068c9e3-4b44-4225-b1a5-e495dd6ab26e" (UID: "2068c9e3-4b44-4225-b1a5-e495dd6ab26e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.316202 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qg4k\" (UniqueName: \"kubernetes.io/projected/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-kube-api-access-6qg4k\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.316255 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.316275 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2068c9e3-4b44-4225-b1a5-e495dd6ab26e-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.711985 4958 generic.go:334] "Generic (PLEG): container finished" podID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerID="d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163" exitCode=0 Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.712090 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b5fmm" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.712121 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5fmm" event={"ID":"2068c9e3-4b44-4225-b1a5-e495dd6ab26e","Type":"ContainerDied","Data":"d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163"} Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.712497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5fmm" event={"ID":"2068c9e3-4b44-4225-b1a5-e495dd6ab26e","Type":"ContainerDied","Data":"db33b7cc7929af175692d7644fe5fd45c702828d7aff0b7be1cc9f4c74479575"} Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.712536 4958 scope.go:117] "RemoveContainer" containerID="d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.733410 4958 scope.go:117] "RemoveContainer" containerID="f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.745409 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5fmm"] Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.748944 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b5fmm"] Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.773480 4958 scope.go:117] "RemoveContainer" containerID="9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.796763 4958 scope.go:117] "RemoveContainer" containerID="d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163" Oct 06 11:51:03 crc kubenswrapper[4958]: E1006 11:51:03.797280 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163\": container with ID starting with d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163 not found: ID does not exist" containerID="d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.797307 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163"} err="failed to get container status \"d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163\": rpc error: code = NotFound desc = could not find container \"d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163\": container with ID starting with d133d8c686054cacf76e8531bde7252aff1e2972ddca1bc902adf19d92605163 not found: ID does not exist" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.797329 4958 scope.go:117] "RemoveContainer" containerID="f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a" Oct 06 11:51:03 crc kubenswrapper[4958]: E1006 11:51:03.797854 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a\": container with ID starting with f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a not found: ID does not exist" containerID="f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.797902 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a"} err="failed to get container status \"f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a\": rpc error: code = NotFound desc = could not find container \"f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a\": container with ID 
starting with f7523debc04b05fac608ca33bc1a66c27de977d3229c982385d9dcb5f9665f8a not found: ID does not exist" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.797935 4958 scope.go:117] "RemoveContainer" containerID="9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8" Oct 06 11:51:03 crc kubenswrapper[4958]: E1006 11:51:03.798464 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8\": container with ID starting with 9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8 not found: ID does not exist" containerID="9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8" Oct 06 11:51:03 crc kubenswrapper[4958]: I1006 11:51:03.798493 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8"} err="failed to get container status \"9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8\": rpc error: code = NotFound desc = could not find container \"9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8\": container with ID starting with 9b06c7873c6407b401a3cd4bd4b93c3d2d991954e7ab0ca73f9edcc2ca8412c8 not found: ID does not exist" Oct 06 11:51:04 crc kubenswrapper[4958]: I1006 11:51:04.922401 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" path="/var/lib/kubelet/pods/2068c9e3-4b44-4225-b1a5-e495dd6ab26e/volumes" Oct 06 11:51:07 crc kubenswrapper[4958]: I1006 11:51:07.209977 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:51:07 crc kubenswrapper[4958]: I1006 11:51:07.264905 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:51:07 crc 
kubenswrapper[4958]: I1006 11:51:07.265229 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:51:07 crc kubenswrapper[4958]: I1006 11:51:07.267030 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:51:07 crc kubenswrapper[4958]: I1006 11:51:07.311602 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:51:07 crc kubenswrapper[4958]: I1006 11:51:07.787991 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:51:09 crc kubenswrapper[4958]: I1006 11:51:09.450346 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fqzt6" Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.028916 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r96rs" Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.085124 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r96rs" Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.427451 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fznl"] Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.427716 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8fznl" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="registry-server" containerID="cri-o://fdec7ac15f50c771dc6e2c0d2fea89254cc36d9772a7876b813c94d3206fabfe" gracePeriod=2 Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.757653 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerID="fdec7ac15f50c771dc6e2c0d2fea89254cc36d9772a7876b813c94d3206fabfe" exitCode=0 Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.757732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fznl" event={"ID":"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2","Type":"ContainerDied","Data":"fdec7ac15f50c771dc6e2c0d2fea89254cc36d9772a7876b813c94d3206fabfe"} Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.811847 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.908764 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-catalog-content\") pod \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.908824 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-utilities\") pod \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.908848 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwcv9\" (UniqueName: \"kubernetes.io/projected/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-kube-api-access-gwcv9\") pod \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\" (UID: \"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2\") " Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.909978 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-utilities" (OuterVolumeSpecName: "utilities") pod 
"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" (UID: "65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.915296 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-kube-api-access-gwcv9" (OuterVolumeSpecName: "kube-api-access-gwcv9") pod "65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" (UID: "65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2"). InnerVolumeSpecName "kube-api-access-gwcv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:10 crc kubenswrapper[4958]: I1006 11:51:10.952641 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" (UID: "65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.009980 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.010014 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.010027 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwcv9\" (UniqueName: \"kubernetes.io/projected/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2-kube-api-access-gwcv9\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.768309 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fznl" event={"ID":"65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2","Type":"ContainerDied","Data":"c160badad2a8ff2b5d940095be5ec5359bea1ea1a6897535ef26ceafef6cffa7"} Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.768418 4958 scope.go:117] "RemoveContainer" containerID="fdec7ac15f50c771dc6e2c0d2fea89254cc36d9772a7876b813c94d3206fabfe" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.768464 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fznl" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.806074 4958 scope.go:117] "RemoveContainer" containerID="0466e74830822003a84038f337c16db9cc25ae376ef0c0f9983e3aab8b0bcf27" Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.827117 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fznl"] Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.832843 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8fznl"] Oct 06 11:51:11 crc kubenswrapper[4958]: I1006 11:51:11.846759 4958 scope.go:117] "RemoveContainer" containerID="99dade0676d62c0e98fadc1846597ab485bd26886e52d993251914cb135278fb" Oct 06 11:51:12 crc kubenswrapper[4958]: I1006 11:51:12.631265 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqzt6"] Oct 06 11:51:12 crc kubenswrapper[4958]: I1006 11:51:12.631665 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fqzt6" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="registry-server" containerID="cri-o://52a01670941c4a28e863f1483d599203a4fe2f101faa93a82b7359e21e96fe5e" gracePeriod=2 Oct 06 11:51:12 crc kubenswrapper[4958]: I1006 11:51:12.776720 4958 generic.go:334] "Generic (PLEG): container finished" podID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerID="52a01670941c4a28e863f1483d599203a4fe2f101faa93a82b7359e21e96fe5e" exitCode=0 Oct 06 11:51:12 crc kubenswrapper[4958]: I1006 11:51:12.776879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqzt6" event={"ID":"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa","Type":"ContainerDied","Data":"52a01670941c4a28e863f1483d599203a4fe2f101faa93a82b7359e21e96fe5e"} Oct 06 11:51:12 crc kubenswrapper[4958]: I1006 11:51:12.927131 4958 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" path="/var/lib/kubelet/pods/65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2/volumes" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.031222 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqzt6" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.139206 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-utilities\") pod \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.139296 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv2xm\" (UniqueName: \"kubernetes.io/projected/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-kube-api-access-kv2xm\") pod \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.139335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-catalog-content\") pod \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\" (UID: \"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa\") " Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.140639 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-utilities" (OuterVolumeSpecName: "utilities") pod "b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" (UID: "b08c0f74-407f-47c0-8cd4-85b2ceb71bfa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.148474 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-kube-api-access-kv2xm" (OuterVolumeSpecName: "kube-api-access-kv2xm") pod "b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" (UID: "b08c0f74-407f-47c0-8cd4-85b2ceb71bfa"). InnerVolumeSpecName "kube-api-access-kv2xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.163536 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" (UID: "b08c0f74-407f-47c0-8cd4-85b2ceb71bfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.241625 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.241692 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv2xm\" (UniqueName: \"kubernetes.io/projected/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-kube-api-access-kv2xm\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.241712 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.788134 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqzt6" 
event={"ID":"b08c0f74-407f-47c0-8cd4-85b2ceb71bfa","Type":"ContainerDied","Data":"d1074736ca0d15fcbadec158b025e3f718c1302a9bba1f01643120b4d779c4e0"} Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.788241 4958 scope.go:117] "RemoveContainer" containerID="52a01670941c4a28e863f1483d599203a4fe2f101faa93a82b7359e21e96fe5e" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.788240 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqzt6" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.816873 4958 scope.go:117] "RemoveContainer" containerID="96108182a9d978330119a2e7b585bc78d535569aa3ff4420cdcce385bbd704a0" Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.842210 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqzt6"] Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.847291 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqzt6"] Oct 06 11:51:13 crc kubenswrapper[4958]: I1006 11:51:13.866020 4958 scope.go:117] "RemoveContainer" containerID="66bf95c0fa86c72c27d6565518738ee847d192fa2d51508c60ccd43a0e549d1f" Oct 06 11:51:14 crc kubenswrapper[4958]: I1006 11:51:14.926851 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" path="/var/lib/kubelet/pods/b08c0f74-407f-47c0-8cd4-85b2ceb71bfa/volumes" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.507779 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" podUID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" containerName="oauth-openshift" containerID="cri-o://55424fe7ed492a4fc85e0900e39cb52e83b2e4d3886bb82110c6aa89b54fa6f8" gracePeriod=15 Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.851564 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" containerID="55424fe7ed492a4fc85e0900e39cb52e83b2e4d3886bb82110c6aa89b54fa6f8" exitCode=0 Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.851699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" event={"ID":"87166f97-d9f0-4391-87b6-0ea7ce0208e1","Type":"ContainerDied","Data":"55424fe7ed492a4fc85e0900e39cb52e83b2e4d3886bb82110c6aa89b54fa6f8"} Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.946306 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.981801 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdq7w\" (UniqueName: \"kubernetes.io/projected/87166f97-d9f0-4391-87b6-0ea7ce0208e1-kube-api-access-fdq7w\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.981875 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-idp-0-file-data\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.981924 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-error\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.981967 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-serving-cert\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982000 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-session\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-policies\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982053 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-cliconfig\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982084 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-trusted-ca-bundle\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-login\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982128 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-dir\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982185 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-router-certs\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982208 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-service-ca\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982232 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-ocp-branding-template\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.982266 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-provider-selection\") pod \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\" (UID: \"87166f97-d9f0-4391-87b6-0ea7ce0208e1\") " Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986063 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"] Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986288 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="extract-utilities" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986311 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="extract-utilities" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986326 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="extract-content" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986333 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="extract-content" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986341 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" containerName="oauth-openshift" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986347 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" containerName="oauth-openshift" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986356 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="extract-utilities" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986362 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="extract-utilities" Oct 06 11:51:23 crc 
kubenswrapper[4958]: E1006 11:51:23.986369 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="registry-server" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986375 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="registry-server" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986385 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="registry-server" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986390 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="registry-server" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986397 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="registry-server" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986402 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="registry-server" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986410 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="extract-content" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986415 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="extract-content" Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986422 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="extract-content" Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986428 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="extract-content" Oct 06 11:51:23 crc 
kubenswrapper[4958]: E1006 11:51:23.986436 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="extract-utilities"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986444 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="extract-utilities"
Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986451 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="extract-content"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986457 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="extract-content"
Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986465 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675bad16-24bf-4685-b5e5-51311ca2f42e" containerName="pruner"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986471 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="675bad16-24bf-4685-b5e5-51311ca2f42e" containerName="pruner"
Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986479 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="extract-utilities"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986484 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="extract-utilities"
Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986490 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a4e500-57ab-4e3c-a3d5-c0256618c3cd" containerName="pruner"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986496 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a4e500-57ab-4e3c-a3d5-c0256618c3cd" containerName="pruner"
Oct 06 11:51:23 crc kubenswrapper[4958]: E1006 11:51:23.986504 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="registry-server"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986510 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="registry-server"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986591 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" containerName="oauth-openshift"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986600 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b79cfc-41f1-40f1-b29b-9b9b4f8e28a2" containerName="registry-server"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986608 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08c0f74-407f-47c0-8cd4-85b2ceb71bfa" containerName="registry-server"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986614 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad2fcff-c2ad-444d-a6da-d4f04b1d3b25" containerName="registry-server"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986625 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="675bad16-24bf-4685-b5e5-51311ca2f42e" containerName="pruner"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986631 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a4e500-57ab-4e3c-a3d5-c0256618c3cd" containerName="pruner"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.986641 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2068c9e3-4b44-4225-b1a5-e495dd6ab26e" containerName="registry-server"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.988191 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.992191 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.995238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"]
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.996050 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.997746 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:51:23 crc kubenswrapper[4958]: I1006 11:51:23.999493 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.000753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.001002 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87166f97-d9f0-4391-87b6-0ea7ce0208e1-kube-api-access-fdq7w" (OuterVolumeSpecName: "kube-api-access-fdq7w") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "kube-api-access-fdq7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.001024 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.001247 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.001449 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.001842 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.002516 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.002761 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.003634 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.003656 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "87166f97-d9f0-4391-87b6-0ea7ce0208e1" (UID: "87166f97-d9f0-4391-87b6-0ea7ce0208e1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084467 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-session\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkhd\" (UniqueName: \"kubernetes.io/projected/94117b86-abc5-4d3e-a260-efe1472daf2e-kube-api-access-gzkhd\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084699 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-error\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084725 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084842 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94117b86-abc5-4d3e-a260-efe1472daf2e-audit-dir\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.084976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-audit-policies\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-login\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085209 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085247 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085270 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-policies\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085290 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085309 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085330 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085349 4958 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87166f97-d9f0-4391-87b6-0ea7ce0208e1-audit-dir\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085367 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085386 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085404 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085425 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085445 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdq7w\" (UniqueName: \"kubernetes.io/projected/87166f97-d9f0-4391-87b6-0ea7ce0208e1-kube-api-access-fdq7w\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085464 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.085482 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87166f97-d9f0-4391-87b6-0ea7ce0208e1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186766 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186809 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94117b86-abc5-4d3e-a260-efe1472daf2e-audit-dir\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186869 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-audit-policies\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186915 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-login\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94117b86-abc5-4d3e-a260-efe1472daf2e-audit-dir\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.186973 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187012 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187065 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187105 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-session\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187173 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187242 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkhd\" (UniqueName: \"kubernetes.io/projected/94117b86-abc5-4d3e-a260-efe1472daf2e-kube-api-access-gzkhd\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.187396 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-error\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.188170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-audit-policies\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.188320 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.188421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.188878 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.192971 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.193710 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-login\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.194817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-error\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.194850 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-session\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.195386 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.195733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.197240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.197395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94117b86-abc5-4d3e-a260-efe1472daf2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.216521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkhd\" (UniqueName: \"kubernetes.io/projected/94117b86-abc5-4d3e-a260-efe1472daf2e-kube-api-access-gzkhd\") pod \"oauth-openshift-5bc77478bd-dhdcs\" (UID: \"94117b86-abc5-4d3e-a260-efe1472daf2e\") " pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.333960 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.555761 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"]
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.859134 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm" event={"ID":"87166f97-d9f0-4391-87b6-0ea7ce0208e1","Type":"ContainerDied","Data":"9559f00dc325696c0e070b63651dd64765fed84ea93907991d9174006bc992ad"}
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.859268 4958 scope.go:117] "RemoveContainer" containerID="55424fe7ed492a4fc85e0900e39cb52e83b2e4d3886bb82110c6aa89b54fa6f8"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.859518 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dftvm"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.873763 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs" event={"ID":"94117b86-abc5-4d3e-a260-efe1472daf2e","Type":"ContainerStarted","Data":"93a00b9f448cec6bf15c7a3d2e97cec166ba33f60e458aa35f8f5f5518fc21a5"}
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.874135 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.874179 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs" event={"ID":"94117b86-abc5-4d3e-a260-efe1472daf2e","Type":"ContainerStarted","Data":"e48d56a71ab79adfb8e986961a35924f3a8821cca1a4ca8c3922ec337d25f58b"}
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.877273 4958 patch_prober.go:28] interesting pod/oauth-openshift-5bc77478bd-dhdcs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" start-of-body=
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.877338 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs" podUID="94117b86-abc5-4d3e-a260-efe1472daf2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.906821 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs" podStartSLOduration=26.90679634 podStartE2EDuration="26.90679634s" podCreationTimestamp="2025-10-06 11:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:51:24.901628778 +0000 UTC m=+238.787654166" watchObservedRunningTime="2025-10-06 11:51:24.90679634 +0000 UTC m=+238.792821658"
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.926243 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dftvm"]
Oct 06 11:51:24 crc kubenswrapper[4958]: I1006 11:51:24.926723 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dftvm"]
Oct 06 11:51:25 crc kubenswrapper[4958]: I1006 11:51:25.889743 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5bc77478bd-dhdcs"
Oct 06 11:51:26 crc kubenswrapper[4958]: I1006 11:51:26.926107 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87166f97-d9f0-4391-87b6-0ea7ce0208e1" path="/var/lib/kubelet/pods/87166f97-d9f0-4391-87b6-0ea7ce0208e1/volumes"
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.794127 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vv7xq"]
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.795198 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vv7xq" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="registry-server" containerID="cri-o://2c30bfef579af1fb0eb74c65ad7dc393a11d4e89178e8abd4fffb93f341afc07" gracePeriod=30
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.803331 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5l5pp"]
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.803563 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5l5pp" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="registry-server" containerID="cri-o://66b7ff6233a1defd1880ce30f9c0d93ccda69fcd878842e63998e8cd5405b959" gracePeriod=30
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.812056 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-928ch"]
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.812262 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerName="marketplace-operator" containerID="cri-o://20adad655409ba9f8f35264302dee6ceedb4284abb26084ce6fa087cc78f907d" gracePeriod=30
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.821566 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lntkc"]
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.821776 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lntkc" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="registry-server" containerID="cri-o://b23aa956000a239c5219aa8a7f3866487de5fb55cdfcc4aad0fccbeac213115d" gracePeriod=30
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.833636 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xg6kb"]
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.834479 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r96rs"]
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.834724 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r96rs" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="registry-server" containerID="cri-o://da951b810ea6c170ba43433f46e4832bfc6d85442141b8fac65174e8e5570cec" gracePeriod=30
Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.834903 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.841401 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xg6kb"] Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.881741 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7kr\" (UniqueName: \"kubernetes.io/projected/e4ac258a-8e41-4889-9395-9f0a614425cb-kube-api-access-ls7kr\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.882062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4ac258a-8e41-4889-9395-9f0a614425cb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.882242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4ac258a-8e41-4889-9395-9f0a614425cb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.983276 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls7kr\" (UniqueName: \"kubernetes.io/projected/e4ac258a-8e41-4889-9395-9f0a614425cb-kube-api-access-ls7kr\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: 
\"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.983596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4ac258a-8e41-4889-9395-9f0a614425cb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.983670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4ac258a-8e41-4889-9395-9f0a614425cb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.985268 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4ac258a-8e41-4889-9395-9f0a614425cb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:45 crc kubenswrapper[4958]: I1006 11:51:45.990369 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4ac258a-8e41-4889-9395-9f0a614425cb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.003470 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ls7kr\" (UniqueName: \"kubernetes.io/projected/e4ac258a-8e41-4889-9395-9f0a614425cb-kube-api-access-ls7kr\") pod \"marketplace-operator-79b997595-xg6kb\" (UID: \"e4ac258a-8e41-4889-9395-9f0a614425cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.021822 4958 generic.go:334] "Generic (PLEG): container finished" podID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerID="2c30bfef579af1fb0eb74c65ad7dc393a11d4e89178e8abd4fffb93f341afc07" exitCode=0 Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.021935 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv7xq" event={"ID":"d262bad7-9d80-42f3-b97d-149b73d879c0","Type":"ContainerDied","Data":"2c30bfef579af1fb0eb74c65ad7dc393a11d4e89178e8abd4fffb93f341afc07"} Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.026179 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerID="da951b810ea6c170ba43433f46e4832bfc6d85442141b8fac65174e8e5570cec" exitCode=0 Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.026215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r96rs" event={"ID":"4c365fe8-47fa-4b1e-9d41-51d5b21a377e","Type":"ContainerDied","Data":"da951b810ea6c170ba43433f46e4832bfc6d85442141b8fac65174e8e5570cec"} Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.029750 4958 generic.go:334] "Generic (PLEG): container finished" podID="f668bb45-68a3-4e4a-850e-45f82572b753" containerID="b23aa956000a239c5219aa8a7f3866487de5fb55cdfcc4aad0fccbeac213115d" exitCode=0 Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.029842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lntkc" 
event={"ID":"f668bb45-68a3-4e4a-850e-45f82572b753","Type":"ContainerDied","Data":"b23aa956000a239c5219aa8a7f3866487de5fb55cdfcc4aad0fccbeac213115d"} Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.032678 4958 generic.go:334] "Generic (PLEG): container finished" podID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerID="66b7ff6233a1defd1880ce30f9c0d93ccda69fcd878842e63998e8cd5405b959" exitCode=0 Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.032719 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l5pp" event={"ID":"3309b80a-11e6-4b60-be3f-c161644ffc7b","Type":"ContainerDied","Data":"66b7ff6233a1defd1880ce30f9c0d93ccda69fcd878842e63998e8cd5405b959"} Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.033896 4958 generic.go:334] "Generic (PLEG): container finished" podID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerID="20adad655409ba9f8f35264302dee6ceedb4284abb26084ce6fa087cc78f907d" exitCode=0 Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.033919 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" event={"ID":"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4","Type":"ContainerDied","Data":"20adad655409ba9f8f35264302dee6ceedb4284abb26084ce6fa087cc78f907d"} Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.200781 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.209183 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.279852 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r96rs" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.284863 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.285676 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-utilities\") pod \"d262bad7-9d80-42f3-b97d-149b73d879c0\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.285739 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96xq\" (UniqueName: \"kubernetes.io/projected/d262bad7-9d80-42f3-b97d-149b73d879c0-kube-api-access-m96xq\") pod \"d262bad7-9d80-42f3-b97d-149b73d879c0\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.285788 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-catalog-content\") pod \"d262bad7-9d80-42f3-b97d-149b73d879c0\" (UID: \"d262bad7-9d80-42f3-b97d-149b73d879c0\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.286799 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-utilities" (OuterVolumeSpecName: "utilities") pod "d262bad7-9d80-42f3-b97d-149b73d879c0" (UID: "d262bad7-9d80-42f3-b97d-149b73d879c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.287706 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.289112 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d262bad7-9d80-42f3-b97d-149b73d879c0-kube-api-access-m96xq" (OuterVolumeSpecName: "kube-api-access-m96xq") pod "d262bad7-9d80-42f3-b97d-149b73d879c0" (UID: "d262bad7-9d80-42f3-b97d-149b73d879c0"). InnerVolumeSpecName "kube-api-access-m96xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.305065 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lntkc" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.381304 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d262bad7-9d80-42f3-b97d-149b73d879c0" (UID: "d262bad7-9d80-42f3-b97d-149b73d879c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387473 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxswz\" (UniqueName: \"kubernetes.io/projected/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-kube-api-access-zxswz\") pod \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387649 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-catalog-content\") pod \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387836 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqdsq\" (UniqueName: \"kubernetes.io/projected/3309b80a-11e6-4b60-be3f-c161644ffc7b-kube-api-access-sqdsq\") pod \"3309b80a-11e6-4b60-be3f-c161644ffc7b\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387861 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96bk9\" (UniqueName: \"kubernetes.io/projected/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-kube-api-access-96bk9\") pod \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387890 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-utilities\") pod \"3309b80a-11e6-4b60-be3f-c161644ffc7b\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-trusted-ca\") pod \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.387965 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-utilities\") pod \"f668bb45-68a3-4e4a-850e-45f82572b753\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388068 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-utilities\") pod \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\" (UID: \"4c365fe8-47fa-4b1e-9d41-51d5b21a377e\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-catalog-content\") pod \"3309b80a-11e6-4b60-be3f-c161644ffc7b\" (UID: \"3309b80a-11e6-4b60-be3f-c161644ffc7b\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388190 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb7vx\" (UniqueName: \"kubernetes.io/projected/f668bb45-68a3-4e4a-850e-45f82572b753-kube-api-access-jb7vx\") pod \"f668bb45-68a3-4e4a-850e-45f82572b753\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388223 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-catalog-content\") pod \"f668bb45-68a3-4e4a-850e-45f82572b753\" (UID: \"f668bb45-68a3-4e4a-850e-45f82572b753\") " 
Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388259 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-operator-metrics\") pod \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\" (UID: \"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4\") " Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388618 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96xq\" (UniqueName: \"kubernetes.io/projected/d262bad7-9d80-42f3-b97d-149b73d879c0-kube-api-access-m96xq\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388634 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.388644 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d262bad7-9d80-42f3-b97d-149b73d879c0-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.390866 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-utilities" (OuterVolumeSpecName: "utilities") pod "4c365fe8-47fa-4b1e-9d41-51d5b21a377e" (UID: "4c365fe8-47fa-4b1e-9d41-51d5b21a377e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.391471 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-kube-api-access-96bk9" (OuterVolumeSpecName: "kube-api-access-96bk9") pod "4c365fe8-47fa-4b1e-9d41-51d5b21a377e" (UID: "4c365fe8-47fa-4b1e-9d41-51d5b21a377e"). 
InnerVolumeSpecName "kube-api-access-96bk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.391494 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-utilities" (OuterVolumeSpecName: "utilities") pod "3309b80a-11e6-4b60-be3f-c161644ffc7b" (UID: "3309b80a-11e6-4b60-be3f-c161644ffc7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.391790 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" (UID: "8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.392264 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" (UID: "8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.393493 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-utilities" (OuterVolumeSpecName: "utilities") pod "f668bb45-68a3-4e4a-850e-45f82572b753" (UID: "f668bb45-68a3-4e4a-850e-45f82572b753"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.397512 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f668bb45-68a3-4e4a-850e-45f82572b753-kube-api-access-jb7vx" (OuterVolumeSpecName: "kube-api-access-jb7vx") pod "f668bb45-68a3-4e4a-850e-45f82572b753" (UID: "f668bb45-68a3-4e4a-850e-45f82572b753"). InnerVolumeSpecName "kube-api-access-jb7vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.398312 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3309b80a-11e6-4b60-be3f-c161644ffc7b-kube-api-access-sqdsq" (OuterVolumeSpecName: "kube-api-access-sqdsq") pod "3309b80a-11e6-4b60-be3f-c161644ffc7b" (UID: "3309b80a-11e6-4b60-be3f-c161644ffc7b"). InnerVolumeSpecName "kube-api-access-sqdsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.404870 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-kube-api-access-zxswz" (OuterVolumeSpecName: "kube-api-access-zxswz") pod "8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" (UID: "8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4"). InnerVolumeSpecName "kube-api-access-zxswz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.414643 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f668bb45-68a3-4e4a-850e-45f82572b753" (UID: "f668bb45-68a3-4e4a-850e-45f82572b753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.469431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3309b80a-11e6-4b60-be3f-c161644ffc7b" (UID: "3309b80a-11e6-4b60-be3f-c161644ffc7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.475855 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c365fe8-47fa-4b1e-9d41-51d5b21a377e" (UID: "4c365fe8-47fa-4b1e-9d41-51d5b21a377e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490670 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqdsq\" (UniqueName: \"kubernetes.io/projected/3309b80a-11e6-4b60-be3f-c161644ffc7b-kube-api-access-sqdsq\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490703 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96bk9\" (UniqueName: \"kubernetes.io/projected/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-kube-api-access-96bk9\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490713 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490723 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490732 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490741 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490750 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3309b80a-11e6-4b60-be3f-c161644ffc7b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490759 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb7vx\" (UniqueName: \"kubernetes.io/projected/f668bb45-68a3-4e4a-850e-45f82572b753-kube-api-access-jb7vx\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490771 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f668bb45-68a3-4e4a-850e-45f82572b753-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490817 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490828 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxswz\" (UniqueName: 
\"kubernetes.io/projected/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4-kube-api-access-zxswz\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.490836 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c365fe8-47fa-4b1e-9d41-51d5b21a377e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:51:46 crc kubenswrapper[4958]: I1006 11:51:46.604860 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xg6kb"] Oct 06 11:51:46 crc kubenswrapper[4958]: W1006 11:51:46.617484 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4ac258a_8e41_4889_9395_9f0a614425cb.slice/crio-a6b9b04df494d93dda2864ae1812d5895bbb17dc4e938ce47ead42f1edef414c WatchSource:0}: Error finding container a6b9b04df494d93dda2864ae1812d5895bbb17dc4e938ce47ead42f1edef414c: Status 404 returned error can't find the container with id a6b9b04df494d93dda2864ae1812d5895bbb17dc4e938ce47ead42f1edef414c Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.040463 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l5pp" event={"ID":"3309b80a-11e6-4b60-be3f-c161644ffc7b","Type":"ContainerDied","Data":"0d6d8334801b6b1e3044e1d3d01769000b721de86fca0397f9c6d425f8c1038a"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.040504 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5l5pp" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.040530 4958 scope.go:117] "RemoveContainer" containerID="66b7ff6233a1defd1880ce30f9c0d93ccda69fcd878842e63998e8cd5405b959" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.041510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" event={"ID":"8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4","Type":"ContainerDied","Data":"d89319e4f2e5420619056e560443867ad9dba732e32866069f0168ec08df2b7e"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.041611 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-928ch" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.043290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" event={"ID":"e4ac258a-8e41-4889-9395-9f0a614425cb","Type":"ContainerStarted","Data":"6433acd69a46574637d04f51c20f48ce35d16b338167e0d3773e439a9e43bf65"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.043333 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" event={"ID":"e4ac258a-8e41-4889-9395-9f0a614425cb","Type":"ContainerStarted","Data":"a6b9b04df494d93dda2864ae1812d5895bbb17dc4e938ce47ead42f1edef414c"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.043699 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.046600 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xg6kb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection 
refused" start-of-body= Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.046634 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" podUID="e4ac258a-8e41-4889-9395-9f0a614425cb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.047260 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv7xq" event={"ID":"d262bad7-9d80-42f3-b97d-149b73d879c0","Type":"ContainerDied","Data":"b6126f78e4c5372671da65984a93fdee9a8b906feb2efa3f11beef0c3c670062"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.047304 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv7xq" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.049857 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r96rs" event={"ID":"4c365fe8-47fa-4b1e-9d41-51d5b21a377e","Type":"ContainerDied","Data":"f7566f7d79f3f3f15730d5a4d95be14b3191a5488a5d61300a67597a0a4b356e"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.049922 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r96rs" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.053055 4958 scope.go:117] "RemoveContainer" containerID="7d3e9b6c9707e03401d38c8995219ea6749f549e73c9b70559d908071a2aa64a" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.054732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lntkc" event={"ID":"f668bb45-68a3-4e4a-850e-45f82572b753","Type":"ContainerDied","Data":"e57cda0140ef5f3707bbc291187724a53df848edc9c381f0dfe9a1f812c5a235"} Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.054897 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lntkc" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.064416 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5l5pp"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.068403 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5l5pp"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.071324 4958 scope.go:117] "RemoveContainer" containerID="6fd9d2d03d558775e14ee9b19bf7270bf4797410f18890909814a1eaae008ae6" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.083447 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" podStartSLOduration=2.083429864 podStartE2EDuration="2.083429864s" podCreationTimestamp="2025-10-06 11:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:51:47.08242864 +0000 UTC m=+260.968453948" watchObservedRunningTime="2025-10-06 11:51:47.083429864 +0000 UTC m=+260.969455202" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.095220 4958 scope.go:117] "RemoveContainer" 
containerID="20adad655409ba9f8f35264302dee6ceedb4284abb26084ce6fa087cc78f907d" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.098112 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-928ch"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.101268 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-928ch"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.107825 4958 scope.go:117] "RemoveContainer" containerID="2c30bfef579af1fb0eb74c65ad7dc393a11d4e89178e8abd4fffb93f341afc07" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.112337 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vv7xq"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.116741 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vv7xq"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.131217 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lntkc"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.135479 4958 scope.go:117] "RemoveContainer" containerID="728d21ce65acb6e431ddbc467abd230b78f71dbc5ab04cfcd66cdd0d8c251bec" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.141621 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lntkc"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.145925 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r96rs"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.148556 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r96rs"] Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.156110 4958 scope.go:117] "RemoveContainer" 
containerID="88a52d34ae07c03d1ebaaef7afc01869f3a7a8c8c894b9af19ca025f006e5df8" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.182826 4958 scope.go:117] "RemoveContainer" containerID="da951b810ea6c170ba43433f46e4832bfc6d85442141b8fac65174e8e5570cec" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.196231 4958 scope.go:117] "RemoveContainer" containerID="2313e8151b54ae6d13864cc02bdb6d9731f5a1521f90eb7a34a24a32da64bdf1" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.211756 4958 scope.go:117] "RemoveContainer" containerID="add7323d8e2b6c5592a0676c41634b09eca4748b5b72ff28b7d7837a5bab48fb" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.228872 4958 scope.go:117] "RemoveContainer" containerID="b23aa956000a239c5219aa8a7f3866487de5fb55cdfcc4aad0fccbeac213115d" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.243151 4958 scope.go:117] "RemoveContainer" containerID="5ee2206f250bd32daf3447fa982c4b0bf32e457e0a81a3f75642567598ac664e" Oct 06 11:51:47 crc kubenswrapper[4958]: I1006 11:51:47.257586 4958 scope.go:117] "RemoveContainer" containerID="f0ce3a3f9655d45d417096f6e116cf6ac7facf155f772a9eedf3956769774482" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.011609 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gcls7"] Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012129 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012148 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012187 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012196 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012208 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012216 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012228 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012235 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012246 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012254 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012263 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012271 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012284 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012293 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012301 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012308 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012320 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerName="marketplace-operator" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012327 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerName="marketplace-operator" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012337 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012344 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012353 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012360 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="extract-utilities" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012372 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012379 
4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="extract-content" Oct 06 11:51:48 crc kubenswrapper[4958]: E1006 11:51:48.012388 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012395 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012492 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012508 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" containerName="marketplace-operator" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012515 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012522 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.012530 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" containerName="registry-server" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.013391 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.017839 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.036621 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcls7"] Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.067339 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xg6kb" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.107800 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lx6n\" (UniqueName: \"kubernetes.io/projected/01236baa-2bc7-4607-b92a-be74c28426af-kube-api-access-4lx6n\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.107896 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01236baa-2bc7-4607-b92a-be74c28426af-utilities\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.108180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01236baa-2bc7-4607-b92a-be74c28426af-catalog-content\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.209770 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4lx6n\" (UniqueName: \"kubernetes.io/projected/01236baa-2bc7-4607-b92a-be74c28426af-kube-api-access-4lx6n\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.209888 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01236baa-2bc7-4607-b92a-be74c28426af-utilities\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.209983 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01236baa-2bc7-4607-b92a-be74c28426af-catalog-content\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.210701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01236baa-2bc7-4607-b92a-be74c28426af-utilities\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.210797 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01236baa-2bc7-4607-b92a-be74c28426af-catalog-content\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.216603 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-lsbmd"] Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.217549 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.220780 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.228386 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsbmd"] Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.242424 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lx6n\" (UniqueName: \"kubernetes.io/projected/01236baa-2bc7-4607-b92a-be74c28426af-kube-api-access-4lx6n\") pod \"redhat-marketplace-gcls7\" (UID: \"01236baa-2bc7-4607-b92a-be74c28426af\") " pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.311355 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a492d97c-2ec7-4443-9bda-536127c35353-utilities\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.311393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbmdp\" (UniqueName: \"kubernetes.io/projected/a492d97c-2ec7-4443-9bda-536127c35353-kube-api-access-wbmdp\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.311423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a492d97c-2ec7-4443-9bda-536127c35353-catalog-content\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.345737 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.413871 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a492d97c-2ec7-4443-9bda-536127c35353-utilities\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.413932 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbmdp\" (UniqueName: \"kubernetes.io/projected/a492d97c-2ec7-4443-9bda-536127c35353-kube-api-access-wbmdp\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.413980 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a492d97c-2ec7-4443-9bda-536127c35353-catalog-content\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.414641 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a492d97c-2ec7-4443-9bda-536127c35353-catalog-content\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 
06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.415088 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a492d97c-2ec7-4443-9bda-536127c35353-utilities\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.436689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbmdp\" (UniqueName: \"kubernetes.io/projected/a492d97c-2ec7-4443-9bda-536127c35353-kube-api-access-wbmdp\") pod \"redhat-operators-lsbmd\" (UID: \"a492d97c-2ec7-4443-9bda-536127c35353\") " pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.531558 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcls7"] Oct 06 11:51:48 crc kubenswrapper[4958]: W1006 11:51:48.537518 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01236baa_2bc7_4607_b92a_be74c28426af.slice/crio-b577e1e17073a057390ce4ddc48c8ad75a9310d0cc9b4edd81a115460be22359 WatchSource:0}: Error finding container b577e1e17073a057390ce4ddc48c8ad75a9310d0cc9b4edd81a115460be22359: Status 404 returned error can't find the container with id b577e1e17073a057390ce4ddc48c8ad75a9310d0cc9b4edd81a115460be22359 Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.538451 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.748100 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsbmd"] Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.920007 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3309b80a-11e6-4b60-be3f-c161644ffc7b" path="/var/lib/kubelet/pods/3309b80a-11e6-4b60-be3f-c161644ffc7b/volumes" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.920858 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c365fe8-47fa-4b1e-9d41-51d5b21a377e" path="/var/lib/kubelet/pods/4c365fe8-47fa-4b1e-9d41-51d5b21a377e/volumes" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.921526 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4" path="/var/lib/kubelet/pods/8bce8671-7d06-4e9b-81cb-f6ddf56ba8a4/volumes" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.922523 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d262bad7-9d80-42f3-b97d-149b73d879c0" path="/var/lib/kubelet/pods/d262bad7-9d80-42f3-b97d-149b73d879c0/volumes" Oct 06 11:51:48 crc kubenswrapper[4958]: I1006 11:51:48.923168 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f668bb45-68a3-4e4a-850e-45f82572b753" path="/var/lib/kubelet/pods/f668bb45-68a3-4e4a-850e-45f82572b753/volumes" Oct 06 11:51:49 crc kubenswrapper[4958]: I1006 11:51:49.073774 4958 generic.go:334] "Generic (PLEG): container finished" podID="01236baa-2bc7-4607-b92a-be74c28426af" containerID="945baf4216a7991858af853befab92be4d672107df672c2ac383c6514ab58419" exitCode=0 Oct 06 11:51:49 crc kubenswrapper[4958]: I1006 11:51:49.075202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcls7" 
event={"ID":"01236baa-2bc7-4607-b92a-be74c28426af","Type":"ContainerDied","Data":"945baf4216a7991858af853befab92be4d672107df672c2ac383c6514ab58419"} Oct 06 11:51:49 crc kubenswrapper[4958]: I1006 11:51:49.075283 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcls7" event={"ID":"01236baa-2bc7-4607-b92a-be74c28426af","Type":"ContainerStarted","Data":"b577e1e17073a057390ce4ddc48c8ad75a9310d0cc9b4edd81a115460be22359"} Oct 06 11:51:49 crc kubenswrapper[4958]: I1006 11:51:49.079184 4958 generic.go:334] "Generic (PLEG): container finished" podID="a492d97c-2ec7-4443-9bda-536127c35353" containerID="db6d6e9516c1c586087617560f79ce443e0b3a2b9a5955fff06efdd52031ef26" exitCode=0 Oct 06 11:51:49 crc kubenswrapper[4958]: I1006 11:51:49.079244 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsbmd" event={"ID":"a492d97c-2ec7-4443-9bda-536127c35353","Type":"ContainerDied","Data":"db6d6e9516c1c586087617560f79ce443e0b3a2b9a5955fff06efdd52031ef26"} Oct 06 11:51:49 crc kubenswrapper[4958]: I1006 11:51:49.079300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsbmd" event={"ID":"a492d97c-2ec7-4443-9bda-536127c35353","Type":"ContainerStarted","Data":"fe9ed2b8d9218da96746dc8d084eade257de9c3fb4a018c76b6bde8f30554745"} Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.087123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsbmd" event={"ID":"a492d97c-2ec7-4443-9bda-536127c35353","Type":"ContainerStarted","Data":"6fe3d17445ff64a020e40628d9951231ade51931da36c634db527cbce9d1d39b"} Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.090319 4958 generic.go:334] "Generic (PLEG): container finished" podID="01236baa-2bc7-4607-b92a-be74c28426af" containerID="ae34ca62e9f63f73b4bfa93178cf39d83055a52d4f44951ac3b72d467ca6372a" exitCode=0 Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 
11:51:50.090374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcls7" event={"ID":"01236baa-2bc7-4607-b92a-be74c28426af","Type":"ContainerDied","Data":"ae34ca62e9f63f73b4bfa93178cf39d83055a52d4f44951ac3b72d467ca6372a"} Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.414284 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xkchh"] Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.415520 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.417545 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.430407 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkchh"] Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.439440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-catalog-content\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.439583 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-utilities\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.439708 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l88rm\" (UniqueName: \"kubernetes.io/projected/590ccad0-d358-4f6a-9dcd-dfd539830f4e-kube-api-access-l88rm\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.541230 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l88rm\" (UniqueName: \"kubernetes.io/projected/590ccad0-d358-4f6a-9dcd-dfd539830f4e-kube-api-access-l88rm\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.541277 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-catalog-content\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.541307 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-utilities\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.541654 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-utilities\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.542043 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-catalog-content\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.560195 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l88rm\" (UniqueName: \"kubernetes.io/projected/590ccad0-d358-4f6a-9dcd-dfd539830f4e-kube-api-access-l88rm\") pod \"certified-operators-xkchh\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.615942 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4zfnc"] Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.617423 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.620192 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.626523 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zfnc"] Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.643533 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446071e0-1a30-4a0d-8b68-76eb7ece32c9-utilities\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.643597 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkjt\" (UniqueName: 
\"kubernetes.io/projected/446071e0-1a30-4a0d-8b68-76eb7ece32c9-kube-api-access-jlkjt\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.643628 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446071e0-1a30-4a0d-8b68-76eb7ece32c9-catalog-content\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.735223 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.744496 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446071e0-1a30-4a0d-8b68-76eb7ece32c9-utilities\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.744558 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkjt\" (UniqueName: \"kubernetes.io/projected/446071e0-1a30-4a0d-8b68-76eb7ece32c9-kube-api-access-jlkjt\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.744583 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446071e0-1a30-4a0d-8b68-76eb7ece32c9-catalog-content\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " 
pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.745106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446071e0-1a30-4a0d-8b68-76eb7ece32c9-catalog-content\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.745390 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446071e0-1a30-4a0d-8b68-76eb7ece32c9-utilities\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.765647 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkjt\" (UniqueName: \"kubernetes.io/projected/446071e0-1a30-4a0d-8b68-76eb7ece32c9-kube-api-access-jlkjt\") pod \"community-operators-4zfnc\" (UID: \"446071e0-1a30-4a0d-8b68-76eb7ece32c9\") " pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.929039 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkchh"] Oct 06 11:51:50 crc kubenswrapper[4958]: I1006 11:51:50.944967 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.096046 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerStarted","Data":"df7ca5f26f1dd9f20e01b2b5f778447189e7b5396876e21ae3b93373ad9086b9"} Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.096395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerStarted","Data":"fe138ee04901a1ca97d1ed1298738f8e5173db4fedf65b44ab46b057fde35fbd"} Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.098992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcls7" event={"ID":"01236baa-2bc7-4607-b92a-be74c28426af","Type":"ContainerStarted","Data":"108e1c01eea20d28311591490c18a45aa9ed527f91cc3efe4299bc917f778013"} Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.101298 4958 generic.go:334] "Generic (PLEG): container finished" podID="a492d97c-2ec7-4443-9bda-536127c35353" containerID="6fe3d17445ff64a020e40628d9951231ade51931da36c634db527cbce9d1d39b" exitCode=0 Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.101333 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsbmd" event={"ID":"a492d97c-2ec7-4443-9bda-536127c35353","Type":"ContainerDied","Data":"6fe3d17445ff64a020e40628d9951231ade51931da36c634db527cbce9d1d39b"} Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.126362 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zfnc"] Oct 06 11:51:51 crc kubenswrapper[4958]: I1006 11:51:51.153693 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gcls7" 
podStartSLOduration=2.732962824 podStartE2EDuration="4.153675886s" podCreationTimestamp="2025-10-06 11:51:47 +0000 UTC" firstStartedPulling="2025-10-06 11:51:49.07716982 +0000 UTC m=+262.963195128" lastFinishedPulling="2025-10-06 11:51:50.497882882 +0000 UTC m=+264.383908190" observedRunningTime="2025-10-06 11:51:51.151081679 +0000 UTC m=+265.037106987" watchObservedRunningTime="2025-10-06 11:51:51.153675886 +0000 UTC m=+265.039701184" Oct 06 11:51:51 crc kubenswrapper[4958]: W1006 11:51:51.156007 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod446071e0_1a30_4a0d_8b68_76eb7ece32c9.slice/crio-e9bd938e6fe2942b47a03ccd30c420cc5a5419ee96bb237275e3954ceb850ed0 WatchSource:0}: Error finding container e9bd938e6fe2942b47a03ccd30c420cc5a5419ee96bb237275e3954ceb850ed0: Status 404 returned error can't find the container with id e9bd938e6fe2942b47a03ccd30c420cc5a5419ee96bb237275e3954ceb850ed0 Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.107480 4958 generic.go:334] "Generic (PLEG): container finished" podID="446071e0-1a30-4a0d-8b68-76eb7ece32c9" containerID="ef470734c36e0b5682b99463379e5f22810af2a7dfe24c79ec058a5bad30fa32" exitCode=0 Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.107554 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zfnc" event={"ID":"446071e0-1a30-4a0d-8b68-76eb7ece32c9","Type":"ContainerDied","Data":"ef470734c36e0b5682b99463379e5f22810af2a7dfe24c79ec058a5bad30fa32"} Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.107862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zfnc" event={"ID":"446071e0-1a30-4a0d-8b68-76eb7ece32c9","Type":"ContainerStarted","Data":"e9bd938e6fe2942b47a03ccd30c420cc5a5419ee96bb237275e3954ceb850ed0"} Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.111252 4958 generic.go:334] "Generic (PLEG): container 
finished" podID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerID="df7ca5f26f1dd9f20e01b2b5f778447189e7b5396876e21ae3b93373ad9086b9" exitCode=0 Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.111318 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerDied","Data":"df7ca5f26f1dd9f20e01b2b5f778447189e7b5396876e21ae3b93373ad9086b9"} Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.111345 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerStarted","Data":"3b92f0f121c06cfc266d36ea953a56c359ed0552145f4615bcfd5a09cfb54609"} Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.115242 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsbmd" event={"ID":"a492d97c-2ec7-4443-9bda-536127c35353","Type":"ContainerStarted","Data":"b41a8be25150d10e8956b4e877bd4a1b49a452ac980bd966ceb36ea59d6653c5"} Oct 06 11:51:52 crc kubenswrapper[4958]: I1006 11:51:52.172618 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lsbmd" podStartSLOduration=1.625418433 podStartE2EDuration="4.172597689s" podCreationTimestamp="2025-10-06 11:51:48 +0000 UTC" firstStartedPulling="2025-10-06 11:51:49.085329454 +0000 UTC m=+262.971354762" lastFinishedPulling="2025-10-06 11:51:51.63250866 +0000 UTC m=+265.518534018" observedRunningTime="2025-10-06 11:51:52.166894727 +0000 UTC m=+266.052920035" watchObservedRunningTime="2025-10-06 11:51:52.172597689 +0000 UTC m=+266.058622997" Oct 06 11:51:53 crc kubenswrapper[4958]: I1006 11:51:53.122091 4958 generic.go:334] "Generic (PLEG): container finished" podID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerID="3b92f0f121c06cfc266d36ea953a56c359ed0552145f4615bcfd5a09cfb54609" exitCode=0 Oct 
06 11:51:53 crc kubenswrapper[4958]: I1006 11:51:53.122222 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerDied","Data":"3b92f0f121c06cfc266d36ea953a56c359ed0552145f4615bcfd5a09cfb54609"} Oct 06 11:51:54 crc kubenswrapper[4958]: I1006 11:51:54.129510 4958 generic.go:334] "Generic (PLEG): container finished" podID="446071e0-1a30-4a0d-8b68-76eb7ece32c9" containerID="2f88716466411fcef7e8beea2086ef136661343797f4cca7e254c933ac8e6a49" exitCode=0 Oct 06 11:51:54 crc kubenswrapper[4958]: I1006 11:51:54.129554 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zfnc" event={"ID":"446071e0-1a30-4a0d-8b68-76eb7ece32c9","Type":"ContainerDied","Data":"2f88716466411fcef7e8beea2086ef136661343797f4cca7e254c933ac8e6a49"} Oct 06 11:51:55 crc kubenswrapper[4958]: I1006 11:51:55.140894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zfnc" event={"ID":"446071e0-1a30-4a0d-8b68-76eb7ece32c9","Type":"ContainerStarted","Data":"84343dcd99eb523715672c0ec0ebf4189d2b5c9984c8e764f371b132025b9db0"} Oct 06 11:51:55 crc kubenswrapper[4958]: I1006 11:51:55.144949 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerStarted","Data":"ebd97896949aa51a7b3fdb90bc7ff13c546d6726780d846b2a184eedceea46bf"} Oct 06 11:51:55 crc kubenswrapper[4958]: I1006 11:51:55.163130 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4zfnc" podStartSLOduration=2.765002655 podStartE2EDuration="5.163109586s" podCreationTimestamp="2025-10-06 11:51:50 +0000 UTC" firstStartedPulling="2025-10-06 11:51:52.109022515 +0000 UTC m=+265.995047833" lastFinishedPulling="2025-10-06 11:51:54.507129456 +0000 UTC 
m=+268.393154764" observedRunningTime="2025-10-06 11:51:55.159369391 +0000 UTC m=+269.045394699" watchObservedRunningTime="2025-10-06 11:51:55.163109586 +0000 UTC m=+269.049134904" Oct 06 11:51:55 crc kubenswrapper[4958]: I1006 11:51:55.181874 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xkchh" podStartSLOduration=2.593881111 podStartE2EDuration="5.181831195s" podCreationTimestamp="2025-10-06 11:51:50 +0000 UTC" firstStartedPulling="2025-10-06 11:51:51.09930041 +0000 UTC m=+264.985325718" lastFinishedPulling="2025-10-06 11:51:53.687250494 +0000 UTC m=+267.573275802" observedRunningTime="2025-10-06 11:51:55.180040005 +0000 UTC m=+269.066065333" watchObservedRunningTime="2025-10-06 11:51:55.181831195 +0000 UTC m=+269.067856523" Oct 06 11:51:58 crc kubenswrapper[4958]: I1006 11:51:58.346038 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:58 crc kubenswrapper[4958]: I1006 11:51:58.346394 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:58 crc kubenswrapper[4958]: I1006 11:51:58.385771 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:58 crc kubenswrapper[4958]: I1006 11:51:58.539342 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:58 crc kubenswrapper[4958]: I1006 11:51:58.539447 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:58 crc kubenswrapper[4958]: I1006 11:51:58.594358 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:51:59 crc kubenswrapper[4958]: I1006 
11:51:59.207550 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gcls7" Oct 06 11:51:59 crc kubenswrapper[4958]: I1006 11:51:59.226594 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lsbmd" Oct 06 11:52:00 crc kubenswrapper[4958]: I1006 11:52:00.736380 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:52:00 crc kubenswrapper[4958]: I1006 11:52:00.736838 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:52:00 crc kubenswrapper[4958]: I1006 11:52:00.783121 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:52:00 crc kubenswrapper[4958]: I1006 11:52:00.945293 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:52:00 crc kubenswrapper[4958]: I1006 11:52:00.945353 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:52:01 crc kubenswrapper[4958]: I1006 11:52:01.002105 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:52:01 crc kubenswrapper[4958]: I1006 11:52:01.240931 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4zfnc" Oct 06 11:52:01 crc kubenswrapper[4958]: I1006 11:52:01.265750 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 11:53:23 crc kubenswrapper[4958]: I1006 11:53:23.801961 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:53:23 crc kubenswrapper[4958]: I1006 11:53:23.802669 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:53:53 crc kubenswrapper[4958]: I1006 11:53:53.801589 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:53:53 crc kubenswrapper[4958]: I1006 11:53:53.802642 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.691261 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hjqzn"] Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.692614 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.720207 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hjqzn"] Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882264 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-registry-certificates\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882619 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-bound-sa-token\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882779 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-registry-tls\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882874 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-trusted-ca\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.882954 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94w6\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-kube-api-access-k94w6\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.883058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.912463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.983856 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.983918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-registry-tls\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.983957 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-trusted-ca\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.983988 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94w6\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-kube-api-access-k94w6\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.984025 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.984051 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-registry-certificates\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.984106 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-bound-sa-token\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.984482 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.985881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-trusted-ca\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc 
kubenswrapper[4958]: I1006 11:54:05.985908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-registry-certificates\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.993418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:05 crc kubenswrapper[4958]: I1006 11:54:05.995052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-registry-tls\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.007819 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94w6\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-kube-api-access-k94w6\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.009759 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f7b9cc8-0326-4c2a-ae59-05f4353a6043-bound-sa-token\") pod \"image-registry-66df7c8f76-hjqzn\" (UID: \"0f7b9cc8-0326-4c2a-ae59-05f4353a6043\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.012006 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.216614 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hjqzn"] Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.988295 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" event={"ID":"0f7b9cc8-0326-4c2a-ae59-05f4353a6043","Type":"ContainerStarted","Data":"ce0400dfd5d2198cea5d3da6d1daaf2fcd4e13495919268bdda81460fea820ea"} Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.988810 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:06 crc kubenswrapper[4958]: I1006 11:54:06.988844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" event={"ID":"0f7b9cc8-0326-4c2a-ae59-05f4353a6043","Type":"ContainerStarted","Data":"3592dbd3f1e7b81a9106c601e3002874c75b5be25aff7cf59e7a90d402cb927d"} Oct 06 11:54:07 crc kubenswrapper[4958]: I1006 11:54:07.013009 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" podStartSLOduration=2.012973283 podStartE2EDuration="2.012973283s" podCreationTimestamp="2025-10-06 11:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:54:07.00710359 +0000 UTC m=+400.893128938" watchObservedRunningTime="2025-10-06 11:54:07.012973283 +0000 UTC m=+400.898998631" Oct 06 11:54:23 crc kubenswrapper[4958]: I1006 11:54:23.802496 4958 patch_prober.go:28] interesting 
pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:54:23 crc kubenswrapper[4958]: I1006 11:54:23.802949 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:54:23 crc kubenswrapper[4958]: I1006 11:54:23.803016 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:54:23 crc kubenswrapper[4958]: I1006 11:54:23.803946 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"995bc778c4c0807c4500542d2b3c01981314abfb47f00f805741a4ecb4ef1873"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 11:54:23 crc kubenswrapper[4958]: I1006 11:54:23.804045 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://995bc778c4c0807c4500542d2b3c01981314abfb47f00f805741a4ecb4ef1873" gracePeriod=600 Oct 06 11:54:24 crc kubenswrapper[4958]: I1006 11:54:24.105915 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="995bc778c4c0807c4500542d2b3c01981314abfb47f00f805741a4ecb4ef1873" exitCode=0 Oct 06 11:54:24 crc kubenswrapper[4958]: I1006 11:54:24.106038 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"995bc778c4c0807c4500542d2b3c01981314abfb47f00f805741a4ecb4ef1873"} Oct 06 11:54:24 crc kubenswrapper[4958]: I1006 11:54:24.106395 4958 scope.go:117] "RemoveContainer" containerID="0be6251825a9cd485da028e6f7fd169dae74215a7191607dfd0a4b7a470e43c4" Oct 06 11:54:25 crc kubenswrapper[4958]: I1006 11:54:25.120776 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"62259fb4a3daeef462272162942ee3efe9a1f7d5314ed03623fbe14dfc330edf"} Oct 06 11:54:26 crc kubenswrapper[4958]: I1006 11:54:26.022584 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hjqzn" Oct 06 11:54:26 crc kubenswrapper[4958]: I1006 11:54:26.115531 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kt8qb"] Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.175849 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" podUID="6a7580eb-dece-41a6-8335-33c29bc41056" containerName="registry" containerID="cri-o://14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3" gracePeriod=30 Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.610697 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796058 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-registry-tls\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796217 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-bound-sa-token\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796277 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s55zm\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-kube-api-access-s55zm\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796330 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a7580eb-dece-41a6-8335-33c29bc41056-ca-trust-extracted\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796439 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-trusted-ca\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796522 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a7580eb-dece-41a6-8335-33c29bc41056-installation-pull-secrets\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796565 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-registry-certificates\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.796757 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6a7580eb-dece-41a6-8335-33c29bc41056\" (UID: \"6a7580eb-dece-41a6-8335-33c29bc41056\") " Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.798411 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.798504 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.804191 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7580eb-dece-41a6-8335-33c29bc41056-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.804665 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.805068 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.806075 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-kube-api-access-s55zm" (OuterVolumeSpecName: "kube-api-access-s55zm") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "kube-api-access-s55zm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.818315 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.837662 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7580eb-dece-41a6-8335-33c29bc41056-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6a7580eb-dece-41a6-8335-33c29bc41056" (UID: "6a7580eb-dece-41a6-8335-33c29bc41056"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.898946 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.899006 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s55zm\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-kube-api-access-s55zm\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.899037 4958 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a7580eb-dece-41a6-8335-33c29bc41056-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.899063 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.899085 4958 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a7580eb-dece-41a6-8335-33c29bc41056-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.899107 4958 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a7580eb-dece-41a6-8335-33c29bc41056-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:51 crc kubenswrapper[4958]: I1006 11:54:51.899130 4958 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a7580eb-dece-41a6-8335-33c29bc41056-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.311913 4958 generic.go:334] "Generic (PLEG): container finished" podID="6a7580eb-dece-41a6-8335-33c29bc41056" containerID="14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3" exitCode=0 Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.311965 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" event={"ID":"6a7580eb-dece-41a6-8335-33c29bc41056","Type":"ContainerDied","Data":"14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3"} Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.311999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" event={"ID":"6a7580eb-dece-41a6-8335-33c29bc41056","Type":"ContainerDied","Data":"49052fa5cba9ac9adfefe36493d50adf3632348c937c12f73d5b999be0c2c99e"} Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.312022 4958 scope.go:117] "RemoveContainer" 
containerID="14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3" Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.312168 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kt8qb" Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.349904 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kt8qb"] Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.353749 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kt8qb"] Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.360850 4958 scope.go:117] "RemoveContainer" containerID="14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3" Oct 06 11:54:52 crc kubenswrapper[4958]: E1006 11:54:52.361107 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3\": container with ID starting with 14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3 not found: ID does not exist" containerID="14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3" Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.361139 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3"} err="failed to get container status \"14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3\": rpc error: code = NotFound desc = could not find container \"14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3\": container with ID starting with 14315178cacd50869c144fa520e9f78e001ce768f21035b28800848f3c29f2a3 not found: ID does not exist" Oct 06 11:54:52 crc kubenswrapper[4958]: I1006 11:54:52.925972 4958 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6a7580eb-dece-41a6-8335-33c29bc41056" path="/var/lib/kubelet/pods/6a7580eb-dece-41a6-8335-33c29bc41056/volumes" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.431710 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d6ffb"] Oct 06 11:56:50 crc kubenswrapper[4958]: E1006 11:56:50.432743 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7580eb-dece-41a6-8335-33c29bc41056" containerName="registry" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.432765 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7580eb-dece-41a6-8335-33c29bc41056" containerName="registry" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.432984 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7580eb-dece-41a6-8335-33c29bc41056" containerName="registry" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.433599 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d6ffb" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.436054 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hclvh" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.436671 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-75ksk"] Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.437468 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.437602 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.439597 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.440629 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bhcvv" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.457103 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bcxhs"] Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.457912 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.459979 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rcw8s" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.461405 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-75ksk"] Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.474305 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bcxhs"] Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.486439 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d6ffb"] Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.496248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4rf\" (UniqueName: \"kubernetes.io/projected/2e49dc60-a2ba-4e79-9563-2dd8857d45b0-kube-api-access-cn4rf\") pod \"cert-manager-webhook-5655c58dd6-bcxhs\" (UID: \"2e49dc60-a2ba-4e79-9563-2dd8857d45b0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:56:50 crc 
kubenswrapper[4958]: I1006 11:56:50.496321 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzxj\" (UniqueName: \"kubernetes.io/projected/78fb0e67-9bf0-4357-9208-9fff92c3074c-kube-api-access-dxzxj\") pod \"cert-manager-cainjector-7f985d654d-75ksk\" (UID: \"78fb0e67-9bf0-4357-9208-9fff92c3074c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.496404 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brv2l\" (UniqueName: \"kubernetes.io/projected/063d4ef1-4461-4677-90de-7e746456a573-kube-api-access-brv2l\") pod \"cert-manager-5b446d88c5-d6ffb\" (UID: \"063d4ef1-4461-4677-90de-7e746456a573\") " pod="cert-manager/cert-manager-5b446d88c5-d6ffb" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.597527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brv2l\" (UniqueName: \"kubernetes.io/projected/063d4ef1-4461-4677-90de-7e746456a573-kube-api-access-brv2l\") pod \"cert-manager-5b446d88c5-d6ffb\" (UID: \"063d4ef1-4461-4677-90de-7e746456a573\") " pod="cert-manager/cert-manager-5b446d88c5-d6ffb" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.597606 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn4rf\" (UniqueName: \"kubernetes.io/projected/2e49dc60-a2ba-4e79-9563-2dd8857d45b0-kube-api-access-cn4rf\") pod \"cert-manager-webhook-5655c58dd6-bcxhs\" (UID: \"2e49dc60-a2ba-4e79-9563-2dd8857d45b0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.597667 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzxj\" (UniqueName: \"kubernetes.io/projected/78fb0e67-9bf0-4357-9208-9fff92c3074c-kube-api-access-dxzxj\") pod 
\"cert-manager-cainjector-7f985d654d-75ksk\" (UID: \"78fb0e67-9bf0-4357-9208-9fff92c3074c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.620995 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brv2l\" (UniqueName: \"kubernetes.io/projected/063d4ef1-4461-4677-90de-7e746456a573-kube-api-access-brv2l\") pod \"cert-manager-5b446d88c5-d6ffb\" (UID: \"063d4ef1-4461-4677-90de-7e746456a573\") " pod="cert-manager/cert-manager-5b446d88c5-d6ffb" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.621817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxzxj\" (UniqueName: \"kubernetes.io/projected/78fb0e67-9bf0-4357-9208-9fff92c3074c-kube-api-access-dxzxj\") pod \"cert-manager-cainjector-7f985d654d-75ksk\" (UID: \"78fb0e67-9bf0-4357-9208-9fff92c3074c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.627114 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn4rf\" (UniqueName: \"kubernetes.io/projected/2e49dc60-a2ba-4e79-9563-2dd8857d45b0-kube-api-access-cn4rf\") pod \"cert-manager-webhook-5655c58dd6-bcxhs\" (UID: \"2e49dc60-a2ba-4e79-9563-2dd8857d45b0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.758572 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d6ffb" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.773224 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.786353 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:56:50 crc kubenswrapper[4958]: I1006 11:56:50.987324 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d6ffb"] Oct 06 11:56:51 crc kubenswrapper[4958]: I1006 11:56:51.003055 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 11:56:51 crc kubenswrapper[4958]: I1006 11:56:51.051051 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d6ffb" event={"ID":"063d4ef1-4461-4677-90de-7e746456a573","Type":"ContainerStarted","Data":"10620490c7d8694f3e06c980a6eaf2a302f35c5ed1c86486f009cabf1f328324"} Oct 06 11:56:51 crc kubenswrapper[4958]: I1006 11:56:51.293727 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-75ksk"] Oct 06 11:56:51 crc kubenswrapper[4958]: W1006 11:56:51.301958 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78fb0e67_9bf0_4357_9208_9fff92c3074c.slice/crio-854b71ae5c4369854d4b39b9852d3b1f9ee0aa94f5760e56b1f3eaad722403a2 WatchSource:0}: Error finding container 854b71ae5c4369854d4b39b9852d3b1f9ee0aa94f5760e56b1f3eaad722403a2: Status 404 returned error can't find the container with id 854b71ae5c4369854d4b39b9852d3b1f9ee0aa94f5760e56b1f3eaad722403a2 Oct 06 11:56:51 crc kubenswrapper[4958]: I1006 11:56:51.311275 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bcxhs"] Oct 06 11:56:52 crc kubenswrapper[4958]: I1006 11:56:52.058763 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" event={"ID":"2e49dc60-a2ba-4e79-9563-2dd8857d45b0","Type":"ContainerStarted","Data":"776e6b587ec0165431b4877d0d5b8cc203533954570b41822c6007260615d3b2"} Oct 06 11:56:52 crc kubenswrapper[4958]: 
I1006 11:56:52.060780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" event={"ID":"78fb0e67-9bf0-4357-9208-9fff92c3074c","Type":"ContainerStarted","Data":"854b71ae5c4369854d4b39b9852d3b1f9ee0aa94f5760e56b1f3eaad722403a2"} Oct 06 11:56:53 crc kubenswrapper[4958]: I1006 11:56:53.802249 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:56:53 crc kubenswrapper[4958]: I1006 11:56:53.802666 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.080957 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d6ffb" event={"ID":"063d4ef1-4461-4677-90de-7e746456a573","Type":"ContainerStarted","Data":"77ed286f8f921c7e7d26083858aad5f31c008238b46eb2ee1333385b572014ad"} Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.083528 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" event={"ID":"78fb0e67-9bf0-4357-9208-9fff92c3074c","Type":"ContainerStarted","Data":"899529da3ff79ff211830fd4cf3689b4a040bd768acd2d1ce5933cdc5619d0ce"} Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.086114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" 
event={"ID":"2e49dc60-a2ba-4e79-9563-2dd8857d45b0","Type":"ContainerStarted","Data":"3e158dcfa4194f67048ff297863c84c0fdc7782b784215da7001285d46b17106"} Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.086638 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.117163 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-d6ffb" podStartSLOduration=1.515174995 podStartE2EDuration="5.117132327s" podCreationTimestamp="2025-10-06 11:56:50 +0000 UTC" firstStartedPulling="2025-10-06 11:56:51.002805179 +0000 UTC m=+564.888830487" lastFinishedPulling="2025-10-06 11:56:54.604762481 +0000 UTC m=+568.490787819" observedRunningTime="2025-10-06 11:56:55.098507735 +0000 UTC m=+568.984533053" watchObservedRunningTime="2025-10-06 11:56:55.117132327 +0000 UTC m=+569.003157635" Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.118688 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" podStartSLOduration=1.783303642 podStartE2EDuration="5.118683869s" podCreationTimestamp="2025-10-06 11:56:50 +0000 UTC" firstStartedPulling="2025-10-06 11:56:51.321744381 +0000 UTC m=+565.207769709" lastFinishedPulling="2025-10-06 11:56:54.657124618 +0000 UTC m=+568.543149936" observedRunningTime="2025-10-06 11:56:55.116516586 +0000 UTC m=+569.002541894" watchObservedRunningTime="2025-10-06 11:56:55.118683869 +0000 UTC m=+569.004709177" Oct 06 11:56:55 crc kubenswrapper[4958]: I1006 11:56:55.137350 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-75ksk" podStartSLOduration=1.838664858 podStartE2EDuration="5.137329801s" podCreationTimestamp="2025-10-06 11:56:50 +0000 UTC" firstStartedPulling="2025-10-06 11:56:51.306110679 +0000 UTC m=+565.192135987" 
lastFinishedPulling="2025-10-06 11:56:54.604775612 +0000 UTC m=+568.490800930" observedRunningTime="2025-10-06 11:56:55.135953205 +0000 UTC m=+569.021978533" watchObservedRunningTime="2025-10-06 11:56:55.137329801 +0000 UTC m=+569.023355119" Oct 06 11:57:00 crc kubenswrapper[4958]: I1006 11:57:00.791456 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-bcxhs" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.086429 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntrlk"] Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087089 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-controller" containerID="cri-o://d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087229 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="northd" containerID="cri-o://d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087282 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-node" containerID="cri-o://bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087288 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="sbdb" 
containerID="cri-o://1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087341 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087310 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-acl-logging" containerID="cri-o://051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.087300 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="nbdb" containerID="cri-o://484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.121706 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" containerID="cri-o://31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" gracePeriod=30 Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.432160 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/3.log" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.435362 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovn-acl-logging/0.log" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.436018 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovn-controller/0.log" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.436566 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.496632 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4w7dx"] Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.496971 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.496989 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497004 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497037 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497051 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497060 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497071 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="nbdb" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497078 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="nbdb" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497091 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="sbdb" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497196 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="sbdb" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497212 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497220 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497230 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497238 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497247 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="northd" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497281 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="northd" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497294 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-node" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497302 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-node" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497316 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kubecfg-setup" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497324 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kubecfg-setup" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497361 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-acl-logging" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497370 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-acl-logging" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497541 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497554 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="nbdb" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497566 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-node" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497602 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="sbdb" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497612 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497624 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovn-acl-logging" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497633 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497643 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497655 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497690 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="northd" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497699 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497709 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497884 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497894 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: E1006 11:57:01.497930 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.497941 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerName="ovnkube-controller" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.500342 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553804 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-log-socket\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553844 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-openvswitch\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553871 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-systemd\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553887 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-ovn-kubernetes\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 
11:57:01.553914 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd589959-144a-41bd-b6d5-a872e5c25cee-ovn-node-metrics-cert\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553945 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-config\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553964 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-var-lib-openvswitch\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553980 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-ovn\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.553996 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-slash\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-env-overrides\") pod 
\"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-bin\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554045 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-node-log\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554061 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554086 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-netd\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-script-lib\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554175 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-kubelet\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-netns\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554229 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzpjm\" (UniqueName: \"kubernetes.io/projected/cd589959-144a-41bd-b6d5-a872e5c25cee-kube-api-access-vzpjm\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554263 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-etc-openvswitch\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-systemd-units\") pod \"cd589959-144a-41bd-b6d5-a872e5c25cee\" (UID: \"cd589959-144a-41bd-b6d5-a872e5c25cee\") " Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: 
"cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554530 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554550 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-node-log" (OuterVolumeSpecName: "node-log") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554573 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554591 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554804 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554869 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554874 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554916 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554918 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554942 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-log-socket" (OuterVolumeSpecName: "log-socket") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554954 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-slash" (OuterVolumeSpecName: "host-slash") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554986 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.555013 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.555014 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.554887 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.555441 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.560109 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd589959-144a-41bd-b6d5-a872e5c25cee-kube-api-access-vzpjm" (OuterVolumeSpecName: "kube-api-access-vzpjm") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "kube-api-access-vzpjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.560416 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd589959-144a-41bd-b6d5-a872e5c25cee-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.567494 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cd589959-144a-41bd-b6d5-a872e5c25cee" (UID: "cd589959-144a-41bd-b6d5-a872e5c25cee"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655202 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-var-lib-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-slash\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655294 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-ovnkube-config\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655330 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzgj\" (UniqueName: \"kubernetes.io/projected/31cd6b07-2919-40a4-b107-6968722ace36-kube-api-access-fmzgj\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-run-netns\") pod \"ovnkube-node-4w7dx\" (UID: 
\"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-systemd\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655416 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-ovnkube-script-lib\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655592 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-env-overrides\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655636 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-node-log\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655711 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31cd6b07-2919-40a4-b107-6968722ace36-ovn-node-metrics-cert\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655730 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-etc-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655906 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-ovn\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.655959 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-cni-bin\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656109 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-kubelet\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656255 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-cni-netd\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656321 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-systemd-units\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656382 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-run-ovn-kubernetes\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-log-socket\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656610 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd589959-144a-41bd-b6d5-a872e5c25cee-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656646 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656671 4958 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656696 4958 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656721 4958 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-slash\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656744 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656767 4958 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656793 4958 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-node-log\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656818 4958 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656842 4958 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656870 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd589959-144a-41bd-b6d5-a872e5c25cee-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656892 4958 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656917 4958 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656942 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzpjm\" (UniqueName: 
\"kubernetes.io/projected/cd589959-144a-41bd-b6d5-a872e5c25cee-kube-api-access-vzpjm\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656966 4958 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.656988 4958 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.657012 4958 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-log-socket\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.657069 4958 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.657092 4958 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.657117 4958 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd589959-144a-41bd-b6d5-a872e5c25cee-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758829 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-var-lib-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758868 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-slash\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-ovnkube-config\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzgj\" (UniqueName: \"kubernetes.io/projected/31cd6b07-2919-40a4-b107-6968722ace36-kube-api-access-fmzgj\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758940 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-run-netns\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758956 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-systemd\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-var-lib-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758969 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-ovnkube-script-lib\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-run-netns\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759461 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-env-overrides\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759500 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-systemd\") pod 
\"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.758968 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-slash\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759517 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759645 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-ovnkube-script-lib\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-node-log\") pod 
\"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-node-log\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759702 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759737 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31cd6b07-2919-40a4-b107-6968722ace36-ovn-node-metrics-cert\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759739 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759770 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-etc-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759808 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-ovn\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759827 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-cni-bin\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759850 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-kubelet\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759871 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-cni-netd\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759889 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-systemd-units\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 
11:57:01.759907 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-run-ovn-kubernetes\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759912 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-cni-bin\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759924 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-log-socket\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-kubelet\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-run-ovn\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-etc-openvswitch\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-log-socket\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.759985 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-ovnkube-config\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.760006 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-cni-netd\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.760015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31cd6b07-2919-40a4-b107-6968722ace36-env-overrides\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.760025 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.760018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31cd6b07-2919-40a4-b107-6968722ace36-systemd-units\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.763432 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31cd6b07-2919-40a4-b107-6968722ace36-ovn-node-metrics-cert\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.775250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzgj\" (UniqueName: \"kubernetes.io/projected/31cd6b07-2919-40a4-b107-6968722ace36-kube-api-access-fmzgj\") pod \"ovnkube-node-4w7dx\" (UID: \"31cd6b07-2919-40a4-b107-6968722ace36\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:01 crc kubenswrapper[4958]: I1006 11:57:01.812675 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.131218 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/2.log" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.132108 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/1.log" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.132187 4958 generic.go:334] "Generic (PLEG): container finished" podID="8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7" containerID="0bcbdb53e28bf48f1081ca622ed415816a291e7ae71edd74a7d1f241c95fe82e" exitCode=2 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.132237 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerDied","Data":"0bcbdb53e28bf48f1081ca622ed415816a291e7ae71edd74a7d1f241c95fe82e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.132307 4958 scope.go:117] "RemoveContainer" containerID="4e73d50647a8c8e49d503c873fc1939a3747681ec2646bd44db73acf5c7740d0" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.132833 4958 scope.go:117] "RemoveContainer" containerID="0bcbdb53e28bf48f1081ca622ed415816a291e7ae71edd74a7d1f241c95fe82e" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.133370 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4w4h5_openshift-multus(8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7)\"" pod="openshift-multus/multus-4w4h5" podUID="8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.136603 4958 generic.go:334] "Generic (PLEG): container finished" podID="31cd6b07-2919-40a4-b107-6968722ace36" 
containerID="73597e73873b8fe9a5b489f0717a5c8e3ab1b548b77e95ee58587a7312bac0bf" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.136694 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerDied","Data":"73597e73873b8fe9a5b489f0717a5c8e3ab1b548b77e95ee58587a7312bac0bf"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.136726 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"7347d1ef3c4d49bd5914b5142c38a9fbfea9b75d63bbabbf8da130e00dde9103"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.143615 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovnkube-controller/3.log" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.153823 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovn-acl-logging/0.log" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154478 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntrlk_cd589959-144a-41bd-b6d5-a872e5c25cee/ovn-controller/0.log" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154888 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154916 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154931 
4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154942 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154957 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154968 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" exitCode=0 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154980 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" exitCode=143 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.154991 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd589959-144a-41bd-b6d5-a872e5c25cee" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" exitCode=143 Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155019 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155057 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" 
event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155079 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155095 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155173 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155190 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155200 4958 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155210 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155219 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155227 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155235 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155243 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155251 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155259 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155271 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155326 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155341 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155349 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155358 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155366 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155374 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155414 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} Oct 06 
11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155424 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155433 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155444 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155500 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155514 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155522 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155532 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155541 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155549 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155593 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155605 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155614 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155623 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155636 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" event={"ID":"cd589959-144a-41bd-b6d5-a872e5c25cee","Type":"ContainerDied","Data":"9d9ed4bdaca68aafc466a717a228d9c59567fe0afcb34a0474de16fd324d1ee7"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155677 4958 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155691 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155699 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155708 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155719 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155728 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155736 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155776 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155787 4958 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155797 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.155772 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntrlk" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.170406 4958 scope.go:117] "RemoveContainer" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.191683 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.213891 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntrlk"] Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.217838 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntrlk"] Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.222808 4958 scope.go:117] "RemoveContainer" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.253209 4958 scope.go:117] "RemoveContainer" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.270084 4958 scope.go:117] "RemoveContainer" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.288069 4958 scope.go:117] "RemoveContainer" 
containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.299278 4958 scope.go:117] "RemoveContainer" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.311215 4958 scope.go:117] "RemoveContainer" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.328623 4958 scope.go:117] "RemoveContainer" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.348180 4958 scope.go:117] "RemoveContainer" containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.382358 4958 scope.go:117] "RemoveContainer" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.382911 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": container with ID starting with 31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b not found: ID does not exist" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.382961 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} err="failed to get container status \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": rpc error: code = NotFound desc = could not find container \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": container with ID starting with 
31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.382986 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.383434 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": container with ID starting with 0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e not found: ID does not exist" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.383453 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} err="failed to get container status \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": rpc error: code = NotFound desc = could not find container \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": container with ID starting with 0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.383466 4958 scope.go:117] "RemoveContainer" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.383739 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": container with ID starting with 1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e not found: ID does not exist" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" Oct 06 11:57:02 crc 
kubenswrapper[4958]: I1006 11:57:02.383762 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} err="failed to get container status \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": rpc error: code = NotFound desc = could not find container \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": container with ID starting with 1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.383774 4958 scope.go:117] "RemoveContainer" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.384166 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": container with ID starting with 484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be not found: ID does not exist" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.384225 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} err="failed to get container status \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": rpc error: code = NotFound desc = could not find container \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": container with ID starting with 484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.384284 4958 scope.go:117] "RemoveContainer" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" Oct 06 
11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.385205 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": container with ID starting with d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078 not found: ID does not exist" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.385256 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} err="failed to get container status \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": rpc error: code = NotFound desc = could not find container \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": container with ID starting with d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.385286 4958 scope.go:117] "RemoveContainer" containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.385570 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": container with ID starting with a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e not found: ID does not exist" containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.385592 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} err="failed to get container status 
\"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": rpc error: code = NotFound desc = could not find container \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": container with ID starting with a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.385606 4958 scope.go:117] "RemoveContainer" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.385884 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": container with ID starting with bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278 not found: ID does not exist" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.385988 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} err="failed to get container status \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": rpc error: code = NotFound desc = could not find container \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": container with ID starting with bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.386015 4958 scope.go:117] "RemoveContainer" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.386597 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": container with ID starting with 051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84 not found: ID does not exist" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.386622 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} err="failed to get container status \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": rpc error: code = NotFound desc = could not find container \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": container with ID starting with 051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.386645 4958 scope.go:117] "RemoveContainer" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.386922 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": container with ID starting with d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf not found: ID does not exist" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.386968 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} err="failed to get container status \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": rpc error: code = NotFound desc = could not find container \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": container with ID 
starting with d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.386996 4958 scope.go:117] "RemoveContainer" containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" Oct 06 11:57:02 crc kubenswrapper[4958]: E1006 11:57:02.387517 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": container with ID starting with 738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00 not found: ID does not exist" containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.387536 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} err="failed to get container status \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": rpc error: code = NotFound desc = could not find container \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": container with ID starting with 738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.387550 4958 scope.go:117] "RemoveContainer" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.387946 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} err="failed to get container status \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": rpc error: code = NotFound desc = could not find container \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": 
container with ID starting with 31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.387992 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.388571 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} err="failed to get container status \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": rpc error: code = NotFound desc = could not find container \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": container with ID starting with 0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.388589 4958 scope.go:117] "RemoveContainer" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.388863 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} err="failed to get container status \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": rpc error: code = NotFound desc = could not find container \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": container with ID starting with 1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.388895 4958 scope.go:117] "RemoveContainer" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.389191 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} err="failed to get container status \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": rpc error: code = NotFound desc = could not find container \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": container with ID starting with 484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.389214 4958 scope.go:117] "RemoveContainer" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.389628 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} err="failed to get container status \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": rpc error: code = NotFound desc = could not find container \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": container with ID starting with d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.389665 4958 scope.go:117] "RemoveContainer" containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.389894 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} err="failed to get container status \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": rpc error: code = NotFound desc = could not find container \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": container with ID starting with a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e not found: ID does not 
exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.389939 4958 scope.go:117] "RemoveContainer" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.390197 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} err="failed to get container status \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": rpc error: code = NotFound desc = could not find container \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": container with ID starting with bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.390213 4958 scope.go:117] "RemoveContainer" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.390606 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} err="failed to get container status \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": rpc error: code = NotFound desc = could not find container \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": container with ID starting with 051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.390622 4958 scope.go:117] "RemoveContainer" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.390929 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} err="failed to get container status 
\"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": rpc error: code = NotFound desc = could not find container \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": container with ID starting with d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.390958 4958 scope.go:117] "RemoveContainer" containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.391193 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} err="failed to get container status \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": rpc error: code = NotFound desc = could not find container \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": container with ID starting with 738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.391228 4958 scope.go:117] "RemoveContainer" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.391500 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} err="failed to get container status \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": rpc error: code = NotFound desc = could not find container \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": container with ID starting with 31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.391517 4958 scope.go:117] "RemoveContainer" 
containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.391819 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} err="failed to get container status \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": rpc error: code = NotFound desc = could not find container \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": container with ID starting with 0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.391854 4958 scope.go:117] "RemoveContainer" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.392537 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} err="failed to get container status \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": rpc error: code = NotFound desc = could not find container \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": container with ID starting with 1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.392568 4958 scope.go:117] "RemoveContainer" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.392847 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} err="failed to get container status \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": rpc error: code = NotFound desc = could 
not find container \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": container with ID starting with 484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.392865 4958 scope.go:117] "RemoveContainer" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.393212 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} err="failed to get container status \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": rpc error: code = NotFound desc = could not find container \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": container with ID starting with d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.393247 4958 scope.go:117] "RemoveContainer" containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.393680 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} err="failed to get container status \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": rpc error: code = NotFound desc = could not find container \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": container with ID starting with a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.393695 4958 scope.go:117] "RemoveContainer" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 
11:57:02.393977 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} err="failed to get container status \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": rpc error: code = NotFound desc = could not find container \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": container with ID starting with bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.394012 4958 scope.go:117] "RemoveContainer" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.395167 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} err="failed to get container status \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": rpc error: code = NotFound desc = could not find container \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": container with ID starting with 051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.395184 4958 scope.go:117] "RemoveContainer" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.395560 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} err="failed to get container status \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": rpc error: code = NotFound desc = could not find container \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": container with ID starting with 
d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.395603 4958 scope.go:117] "RemoveContainer" containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.396196 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} err="failed to get container status \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": rpc error: code = NotFound desc = could not find container \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": container with ID starting with 738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.396215 4958 scope.go:117] "RemoveContainer" containerID="31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.396665 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b"} err="failed to get container status \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": rpc error: code = NotFound desc = could not find container \"31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b\": container with ID starting with 31e3aeb719b255f884ea8f41ba8f2805ba18c0324cc985c66601ceb2096c177b not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.396727 4958 scope.go:117] "RemoveContainer" containerID="0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.397117 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e"} err="failed to get container status \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": rpc error: code = NotFound desc = could not find container \"0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e\": container with ID starting with 0d26f1c937b4f4a0ec130d9451086f3770f092498d61310b65bdd8c74a68549e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.397177 4958 scope.go:117] "RemoveContainer" containerID="1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.397571 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e"} err="failed to get container status \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": rpc error: code = NotFound desc = could not find container \"1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e\": container with ID starting with 1398aae03a42cf60100ed334a3d65c9c4fd501855d5316e7e1ed3974d0c7e22e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.397606 4958 scope.go:117] "RemoveContainer" containerID="484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.398047 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be"} err="failed to get container status \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": rpc error: code = NotFound desc = could not find container \"484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be\": container with ID starting with 484bfcace3003fb125465d9296c1b04852447d674b6c4abc8fedbd3a1b6ea8be not found: ID does not 
exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.398130 4958 scope.go:117] "RemoveContainer" containerID="d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.398607 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078"} err="failed to get container status \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": rpc error: code = NotFound desc = could not find container \"d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078\": container with ID starting with d7834f1b3141381f7d868acc4aceb2809e81896cb7082de9888c3d540d062078 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.398641 4958 scope.go:117] "RemoveContainer" containerID="a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.398912 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e"} err="failed to get container status \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": rpc error: code = NotFound desc = could not find container \"a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e\": container with ID starting with a38398fb9a752d0059e1adfd4ee23557572492f2d27f63ef90ed4730b742fd0e not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.398927 4958 scope.go:117] "RemoveContainer" containerID="bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.399151 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278"} err="failed to get container status 
\"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": rpc error: code = NotFound desc = could not find container \"bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278\": container with ID starting with bc57f0ff822a9fcb8c3b73146bf4879f6e05c2f92b6a4317528530e54e419278 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.399223 4958 scope.go:117] "RemoveContainer" containerID="051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.399708 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84"} err="failed to get container status \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": rpc error: code = NotFound desc = could not find container \"051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84\": container with ID starting with 051e26a11e5f65ab6e49664f99a5f6a87bf781bf7284f2e2e3996c0e26e77c84 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.399773 4958 scope.go:117] "RemoveContainer" containerID="d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.400017 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf"} err="failed to get container status \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": rpc error: code = NotFound desc = could not find container \"d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf\": container with ID starting with d90731b1e085c514ef0665d4df356b853f432a1012f84a41ed82fa257d57d9bf not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.400080 4958 scope.go:117] "RemoveContainer" 
containerID="738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.400554 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00"} err="failed to get container status \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": rpc error: code = NotFound desc = could not find container \"738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00\": container with ID starting with 738882f7b5c7b87e6d6426a997dd7efe5daf1d97973cf1085097cc0f5e790f00 not found: ID does not exist" Oct 06 11:57:02 crc kubenswrapper[4958]: I1006 11:57:02.922300 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd589959-144a-41bd-b6d5-a872e5c25cee" path="/var/lib/kubelet/pods/cd589959-144a-41bd-b6d5-a872e5c25cee/volumes" Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.176280 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"d4745e29b3cbcaac4bfd61d7f62200131bd8ae76f01bc41d8bd762604650a325"} Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.176352 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"8702e1b346d0298f66062d32d19d8d95c68c1e3104132c448bed16671bfdb595"} Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.176370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"7a00284934601488f5998b004ac8b325bc36413d0f53eeb0d0cf46619ce9243f"} Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.176384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"a4b8d13ffd524b822d331a77bb0176e205b89ad3e2ba28b6b1c278cfe778dbb1"} Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.176395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"5f3d3fd8a7c69bdbb8b4b47a9de056cecebd4fe7ae2dd8e8256120736964aa1a"} Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.176406 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"26e6cce4efa807e0971deb79dab3e63c5065591e63c1807217869e4916a26a45"} Oct 06 11:57:03 crc kubenswrapper[4958]: I1006 11:57:03.179848 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/2.log" Oct 06 11:57:05 crc kubenswrapper[4958]: I1006 11:57:05.195651 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"bb5e2d97283b3f78a07e1b303d10c3b273ba77eb751e4e256ba13f5ab50b7d02"} Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.219464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" event={"ID":"31cd6b07-2919-40a4-b107-6968722ace36","Type":"ContainerStarted","Data":"c67ce9946dcfabac4a94df2f226aa226e783b1d8e0b3f0de30058a0f5e66b96f"} Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.221398 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.221437 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.221505 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.255991 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" podStartSLOduration=7.255967014 podStartE2EDuration="7.255967014s" podCreationTimestamp="2025-10-06 11:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:57:08.253942706 +0000 UTC m=+582.139968044" watchObservedRunningTime="2025-10-06 11:57:08.255967014 +0000 UTC m=+582.141992362" Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.261854 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:08 crc kubenswrapper[4958]: I1006 11:57:08.270176 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:13 crc kubenswrapper[4958]: I1006 11:57:13.913519 4958 scope.go:117] "RemoveContainer" containerID="0bcbdb53e28bf48f1081ca622ed415816a291e7ae71edd74a7d1f241c95fe82e" Oct 06 11:57:13 crc kubenswrapper[4958]: E1006 11:57:13.914327 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4w4h5_openshift-multus(8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7)\"" pod="openshift-multus/multus-4w4h5" podUID="8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7" Oct 06 11:57:23 crc kubenswrapper[4958]: I1006 11:57:23.801790 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:57:23 crc kubenswrapper[4958]: I1006 11:57:23.802447 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:57:26 crc kubenswrapper[4958]: I1006 11:57:26.920491 4958 scope.go:117] "RemoveContainer" containerID="0bcbdb53e28bf48f1081ca622ed415816a291e7ae71edd74a7d1f241c95fe82e" Oct 06 11:57:28 crc kubenswrapper[4958]: I1006 11:57:28.347693 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4w4h5_8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7/kube-multus/2.log" Oct 06 11:57:28 crc kubenswrapper[4958]: I1006 11:57:28.348171 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4w4h5" event={"ID":"8198c8fd-8ec9-4a56-9e87-4e3967fb8ec7","Type":"ContainerStarted","Data":"36b33e4f4a62444d69625609a874b43f3594e85c05c62dca16ca03ae6f77a50c"} Oct 06 11:57:31 crc kubenswrapper[4958]: I1006 11:57:31.841957 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4w7dx" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.496870 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl"] Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.499550 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.505952 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.507201 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl"] Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.622931 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n474\" (UniqueName: \"kubernetes.io/projected/0b95314f-617e-41fc-9afd-26ea796825c8-kube-api-access-6n474\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.623343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.623619 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: 
I1006 11:57:42.725082 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n474\" (UniqueName: \"kubernetes.io/projected/0b95314f-617e-41fc-9afd-26ea796825c8-kube-api-access-6n474\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.725478 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.725762 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.726569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.726569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.758541 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n474\" (UniqueName: \"kubernetes.io/projected/0b95314f-617e-41fc-9afd-26ea796825c8-kube-api-access-6n474\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:42 crc kubenswrapper[4958]: I1006 11:57:42.827557 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:43 crc kubenswrapper[4958]: I1006 11:57:43.315577 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl"] Oct 06 11:57:43 crc kubenswrapper[4958]: W1006 11:57:43.327290 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b95314f_617e_41fc_9afd_26ea796825c8.slice/crio-b939c0a57b682664b4e90af54680d1d848b2e4cb8fd8ec61b1d0cfe4bf22c4ac WatchSource:0}: Error finding container b939c0a57b682664b4e90af54680d1d848b2e4cb8fd8ec61b1d0cfe4bf22c4ac: Status 404 returned error can't find the container with id b939c0a57b682664b4e90af54680d1d848b2e4cb8fd8ec61b1d0cfe4bf22c4ac Oct 06 11:57:43 crc kubenswrapper[4958]: I1006 11:57:43.457968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" 
event={"ID":"0b95314f-617e-41fc-9afd-26ea796825c8","Type":"ContainerStarted","Data":"b939c0a57b682664b4e90af54680d1d848b2e4cb8fd8ec61b1d0cfe4bf22c4ac"} Oct 06 11:57:44 crc kubenswrapper[4958]: I1006 11:57:44.469691 4958 generic.go:334] "Generic (PLEG): container finished" podID="0b95314f-617e-41fc-9afd-26ea796825c8" containerID="fafbb75a375542b43399d0955f070d7da0c7fb11ac2fa07bce328b8177836d6d" exitCode=0 Oct 06 11:57:44 crc kubenswrapper[4958]: I1006 11:57:44.469766 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" event={"ID":"0b95314f-617e-41fc-9afd-26ea796825c8","Type":"ContainerDied","Data":"fafbb75a375542b43399d0955f070d7da0c7fb11ac2fa07bce328b8177836d6d"} Oct 06 11:57:46 crc kubenswrapper[4958]: I1006 11:57:46.487402 4958 generic.go:334] "Generic (PLEG): container finished" podID="0b95314f-617e-41fc-9afd-26ea796825c8" containerID="00c8cadd40ee4eac52adfd14b37c843b89563479626343dcc3bd4a9b734e1a2c" exitCode=0 Oct 06 11:57:46 crc kubenswrapper[4958]: I1006 11:57:46.487538 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" event={"ID":"0b95314f-617e-41fc-9afd-26ea796825c8","Type":"ContainerDied","Data":"00c8cadd40ee4eac52adfd14b37c843b89563479626343dcc3bd4a9b734e1a2c"} Oct 06 11:57:47 crc kubenswrapper[4958]: I1006 11:57:47.502395 4958 generic.go:334] "Generic (PLEG): container finished" podID="0b95314f-617e-41fc-9afd-26ea796825c8" containerID="d7826c7f99ec0be3bec3bb61364fe64254aa39e64a980b5f1c936f0c6fcf55d6" exitCode=0 Oct 06 11:57:47 crc kubenswrapper[4958]: I1006 11:57:47.502456 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" event={"ID":"0b95314f-617e-41fc-9afd-26ea796825c8","Type":"ContainerDied","Data":"d7826c7f99ec0be3bec3bb61364fe64254aa39e64a980b5f1c936f0c6fcf55d6"} 
Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.829014 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.909949 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n474\" (UniqueName: \"kubernetes.io/projected/0b95314f-617e-41fc-9afd-26ea796825c8-kube-api-access-6n474\") pod \"0b95314f-617e-41fc-9afd-26ea796825c8\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.910063 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-util\") pod \"0b95314f-617e-41fc-9afd-26ea796825c8\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.910175 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-bundle\") pod \"0b95314f-617e-41fc-9afd-26ea796825c8\" (UID: \"0b95314f-617e-41fc-9afd-26ea796825c8\") " Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.911477 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-bundle" (OuterVolumeSpecName: "bundle") pod "0b95314f-617e-41fc-9afd-26ea796825c8" (UID: "0b95314f-617e-41fc-9afd-26ea796825c8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.925514 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b95314f-617e-41fc-9afd-26ea796825c8-kube-api-access-6n474" (OuterVolumeSpecName: "kube-api-access-6n474") pod "0b95314f-617e-41fc-9afd-26ea796825c8" (UID: "0b95314f-617e-41fc-9afd-26ea796825c8"). InnerVolumeSpecName "kube-api-access-6n474". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:57:48 crc kubenswrapper[4958]: I1006 11:57:48.940929 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-util" (OuterVolumeSpecName: "util") pod "0b95314f-617e-41fc-9afd-26ea796825c8" (UID: "0b95314f-617e-41fc-9afd-26ea796825c8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:57:49 crc kubenswrapper[4958]: I1006 11:57:49.011584 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-util\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:49 crc kubenswrapper[4958]: I1006 11:57:49.011638 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b95314f-617e-41fc-9afd-26ea796825c8-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:49 crc kubenswrapper[4958]: I1006 11:57:49.011657 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n474\" (UniqueName: \"kubernetes.io/projected/0b95314f-617e-41fc-9afd-26ea796825c8-kube-api-access-6n474\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:49 crc kubenswrapper[4958]: I1006 11:57:49.520649 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" 
event={"ID":"0b95314f-617e-41fc-9afd-26ea796825c8","Type":"ContainerDied","Data":"b939c0a57b682664b4e90af54680d1d848b2e4cb8fd8ec61b1d0cfe4bf22c4ac"} Oct 06 11:57:49 crc kubenswrapper[4958]: I1006 11:57:49.521031 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b939c0a57b682664b4e90af54680d1d848b2e4cb8fd8ec61b1d0cfe4bf22c4ac" Oct 06 11:57:49 crc kubenswrapper[4958]: I1006 11:57:49.520760 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl" Oct 06 11:57:53 crc kubenswrapper[4958]: I1006 11:57:53.802203 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:57:53 crc kubenswrapper[4958]: I1006 11:57:53.802539 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:57:53 crc kubenswrapper[4958]: I1006 11:57:53.802585 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 11:57:53 crc kubenswrapper[4958]: I1006 11:57:53.803107 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62259fb4a3daeef462272162942ee3efe9a1f7d5314ed03623fbe14dfc330edf"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 
11:57:53 crc kubenswrapper[4958]: I1006 11:57:53.803205 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://62259fb4a3daeef462272162942ee3efe9a1f7d5314ed03623fbe14dfc330edf" gracePeriod=600 Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.088382 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-cl46g"] Oct 06 11:57:54 crc kubenswrapper[4958]: E1006 11:57:54.088629 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="util" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.088648 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="util" Oct 06 11:57:54 crc kubenswrapper[4958]: E1006 11:57:54.088665 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="extract" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.088675 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="extract" Oct 06 11:57:54 crc kubenswrapper[4958]: E1006 11:57:54.088692 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="pull" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.088700 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="pull" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.088824 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b95314f-617e-41fc-9afd-26ea796825c8" containerName="extract" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.089301 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.091862 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.091970 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.092103 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-x9rvm" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.103892 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-cl46g"] Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.281614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbl7l\" (UniqueName: \"kubernetes.io/projected/66c0e594-48e4-4f2f-b25e-5b69f377d6e2-kube-api-access-qbl7l\") pod \"nmstate-operator-858ddd8f98-cl46g\" (UID: \"66c0e594-48e4-4f2f-b25e-5b69f377d6e2\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.383537 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbl7l\" (UniqueName: \"kubernetes.io/projected/66c0e594-48e4-4f2f-b25e-5b69f377d6e2-kube-api-access-qbl7l\") pod \"nmstate-operator-858ddd8f98-cl46g\" (UID: \"66c0e594-48e4-4f2f-b25e-5b69f377d6e2\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.406725 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbl7l\" (UniqueName: \"kubernetes.io/projected/66c0e594-48e4-4f2f-b25e-5b69f377d6e2-kube-api-access-qbl7l\") pod \"nmstate-operator-858ddd8f98-cl46g\" (UID: 
\"66c0e594-48e4-4f2f-b25e-5b69f377d6e2\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.563287 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="62259fb4a3daeef462272162942ee3efe9a1f7d5314ed03623fbe14dfc330edf" exitCode=0 Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.563360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"62259fb4a3daeef462272162942ee3efe9a1f7d5314ed03623fbe14dfc330edf"} Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.563410 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"50ea0a44529e4bdc070f78b8c163d73c0feb9c99ae2d2366012f3431b888a961"} Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.563441 4958 scope.go:117] "RemoveContainer" containerID="995bc778c4c0807c4500542d2b3c01981314abfb47f00f805741a4ecb4ef1873" Oct 06 11:57:54 crc kubenswrapper[4958]: I1006 11:57:54.705064 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" Oct 06 11:57:55 crc kubenswrapper[4958]: I1006 11:57:55.003558 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-cl46g"] Oct 06 11:57:55 crc kubenswrapper[4958]: I1006 11:57:55.572923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" event={"ID":"66c0e594-48e4-4f2f-b25e-5b69f377d6e2","Type":"ContainerStarted","Data":"86cca622f9369addea40a906697ed05f8800319a048f9e01623baebbd5c81df3"} Oct 06 11:57:57 crc kubenswrapper[4958]: I1006 11:57:57.589573 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" event={"ID":"66c0e594-48e4-4f2f-b25e-5b69f377d6e2","Type":"ContainerStarted","Data":"50943614e421a804573c21d42f6d31f108f3863e6bcf08d00ea2e7f94fe8d3cc"} Oct 06 11:57:57 crc kubenswrapper[4958]: I1006 11:57:57.609903 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-cl46g" podStartSLOduration=1.364478413 podStartE2EDuration="3.609875698s" podCreationTimestamp="2025-10-06 11:57:54 +0000 UTC" firstStartedPulling="2025-10-06 11:57:55.019645022 +0000 UTC m=+628.905670330" lastFinishedPulling="2025-10-06 11:57:57.265042317 +0000 UTC m=+631.151067615" observedRunningTime="2025-10-06 11:57:57.607133502 +0000 UTC m=+631.493158820" watchObservedRunningTime="2025-10-06 11:57:57.609875698 +0000 UTC m=+631.495901046" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.906835 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch"] Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.908727 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.910821 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7bzx6" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.940356 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp"] Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.941526 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.943434 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.959214 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp"] Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.961725 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-6vlgl"] Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.962391 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.971396 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch"] Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.994962 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87djg\" (UniqueName: \"kubernetes.io/projected/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-kube-api-access-87djg\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.995007 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9n9\" (UniqueName: \"kubernetes.io/projected/fda4902f-9bcc-419f-80f2-40a46dc2e7dd-kube-api-access-zm9n9\") pod \"nmstate-metrics-fdff9cb8d-cmgch\" (UID: \"fda4902f-9bcc-419f-80f2-40a46dc2e7dd\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.995040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-nmstate-lock\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.995056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-ovs-socket\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.995081 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4605847c-947f-4955-b80b-87bb98b3c946-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.995097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-dbus-socket\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:02 crc kubenswrapper[4958]: I1006 11:58:02.995132 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjlzr\" (UniqueName: \"kubernetes.io/projected/4605847c-947f-4955-b80b-87bb98b3c946-kube-api-access-xjlzr\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.061223 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn"] Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.062008 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.064232 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.070432 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn"] Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.070956 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-m2jfn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.071445 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.096831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87djg\" (UniqueName: \"kubernetes.io/projected/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-kube-api-access-87djg\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.096885 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9n9\" (UniqueName: \"kubernetes.io/projected/fda4902f-9bcc-419f-80f2-40a46dc2e7dd-kube-api-access-zm9n9\") pod \"nmstate-metrics-fdff9cb8d-cmgch\" (UID: \"fda4902f-9bcc-419f-80f2-40a46dc2e7dd\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.096910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f482352-713c-4502-aded-dfe37c5fa8bc-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.096946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-nmstate-lock\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.096966 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-ovs-socket\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.096983 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltlr\" (UniqueName: \"kubernetes.io/projected/5f482352-713c-4502-aded-dfe37c5fa8bc-kube-api-access-hltlr\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097009 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4605847c-947f-4955-b80b-87bb98b3c946-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-dbus-socket\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " 
pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097048 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-ovs-socket\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097067 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-nmstate-lock\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097080 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f482352-713c-4502-aded-dfe37c5fa8bc-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: E1006 11:58:03.097182 4958 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097187 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjlzr\" (UniqueName: \"kubernetes.io/projected/4605847c-947f-4955-b80b-87bb98b3c946-kube-api-access-xjlzr\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:03 crc kubenswrapper[4958]: E1006 11:58:03.097265 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4605847c-947f-4955-b80b-87bb98b3c946-tls-key-pair podName:4605847c-947f-4955-b80b-87bb98b3c946 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:03.597242742 +0000 UTC m=+637.483268050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4605847c-947f-4955-b80b-87bb98b3c946-tls-key-pair") pod "nmstate-webhook-6cdbc54649-82rnp" (UID: "4605847c-947f-4955-b80b-87bb98b3c946") : secret "openshift-nmstate-webhook" not found Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.097302 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-dbus-socket\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.114305 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87djg\" (UniqueName: \"kubernetes.io/projected/89638e7b-fd22-4b85-8ee9-7eb5353f06c0-kube-api-access-87djg\") pod \"nmstate-handler-6vlgl\" (UID: \"89638e7b-fd22-4b85-8ee9-7eb5353f06c0\") " pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.114540 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjlzr\" (UniqueName: \"kubernetes.io/projected/4605847c-947f-4955-b80b-87bb98b3c946-kube-api-access-xjlzr\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.115252 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9n9\" (UniqueName: \"kubernetes.io/projected/fda4902f-9bcc-419f-80f2-40a46dc2e7dd-kube-api-access-zm9n9\") pod 
\"nmstate-metrics-fdff9cb8d-cmgch\" (UID: \"fda4902f-9bcc-419f-80f2-40a46dc2e7dd\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.198191 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f482352-713c-4502-aded-dfe37c5fa8bc-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.198287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f482352-713c-4502-aded-dfe37c5fa8bc-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.198336 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hltlr\" (UniqueName: \"kubernetes.io/projected/5f482352-713c-4502-aded-dfe37c5fa8bc-kube-api-access-hltlr\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.199309 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f482352-713c-4502-aded-dfe37c5fa8bc-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.202773 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5f482352-713c-4502-aded-dfe37c5fa8bc-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.223983 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hltlr\" (UniqueName: \"kubernetes.io/projected/5f482352-713c-4502-aded-dfe37c5fa8bc-kube-api-access-hltlr\") pod \"nmstate-console-plugin-6b874cbd85-rd5gn\" (UID: \"5f482352-713c-4502-aded-dfe37c5fa8bc\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.231988 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.234193 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-77b75945b8-2br6j"] Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.234784 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.244228 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77b75945b8-2br6j"] Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.279511 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hwm\" (UniqueName: \"kubernetes.io/projected/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-kube-api-access-v2hwm\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299533 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-trusted-ca-bundle\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299581 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-service-ca\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299605 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-oauth-serving-cert\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299648 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-oauth-config\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299673 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-config\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.299689 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-serving-cert\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.374844 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403180 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hwm\" (UniqueName: \"kubernetes.io/projected/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-kube-api-access-v2hwm\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403241 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-trusted-ca-bundle\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403296 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-service-ca\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403320 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-oauth-serving-cert\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403344 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-oauth-config\") pod \"console-77b75945b8-2br6j\" (UID: 
\"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-config\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.403418 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-serving-cert\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.404269 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-service-ca\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.404615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-trusted-ca-bundle\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.404913 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-oauth-serving-cert\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " 
pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.405320 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-config\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.409444 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-serving-cert\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.409809 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-console-oauth-config\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.426991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hwm\" (UniqueName: \"kubernetes.io/projected/c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d-kube-api-access-v2hwm\") pod \"console-77b75945b8-2br6j\" (UID: \"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d\") " pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.552861 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn"] Oct 06 11:58:03 crc kubenswrapper[4958]: W1006 11:58:03.559895 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f482352_713c_4502_aded_dfe37c5fa8bc.slice/crio-b4ea1afed6502c11905c9652e2caecef180aea99c12b721935c88d2650c3969b WatchSource:0}: Error finding container b4ea1afed6502c11905c9652e2caecef180aea99c12b721935c88d2650c3969b: Status 404 returned error can't find the container with id b4ea1afed6502c11905c9652e2caecef180aea99c12b721935c88d2650c3969b Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.606249 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4605847c-947f-4955-b80b-87bb98b3c946-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.609680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4605847c-947f-4955-b80b-87bb98b3c946-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-82rnp\" (UID: \"4605847c-947f-4955-b80b-87bb98b3c946\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.610396 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.636699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6vlgl" event={"ID":"89638e7b-fd22-4b85-8ee9-7eb5353f06c0","Type":"ContainerStarted","Data":"773512fa72de56745fada8df191a67833e7178d810c398a54268e7b3c8a716e6"} Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.637855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" event={"ID":"5f482352-713c-4502-aded-dfe37c5fa8bc","Type":"ContainerStarted","Data":"b4ea1afed6502c11905c9652e2caecef180aea99c12b721935c88d2650c3969b"} Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.663471 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch"] Oct 06 11:58:03 crc kubenswrapper[4958]: W1006 11:58:03.673892 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda4902f_9bcc_419f_80f2_40a46dc2e7dd.slice/crio-429764266b2bd1d58bd9b9ea4acf91f4097f0db7d9d5b22551830c8d20941445 WatchSource:0}: Error finding container 429764266b2bd1d58bd9b9ea4acf91f4097f0db7d9d5b22551830c8d20941445: Status 404 returned error can't find the container with id 429764266b2bd1d58bd9b9ea4acf91f4097f0db7d9d5b22551830c8d20941445 Oct 06 11:58:03 crc kubenswrapper[4958]: I1006 11:58:03.861821 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.015063 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77b75945b8-2br6j"] Oct 06 11:58:04 crc kubenswrapper[4958]: W1006 11:58:04.032207 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc244cf55_bc9a_4cd2_a5fe_ef11db8bc83d.slice/crio-282d9b888e37404486ae78fff6015911ffaf8ffcf424b42d8e730a60dc63fbd8 WatchSource:0}: Error finding container 282d9b888e37404486ae78fff6015911ffaf8ffcf424b42d8e730a60dc63fbd8: Status 404 returned error can't find the container with id 282d9b888e37404486ae78fff6015911ffaf8ffcf424b42d8e730a60dc63fbd8 Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.091450 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp"] Oct 06 11:58:04 crc kubenswrapper[4958]: W1006 11:58:04.101499 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4605847c_947f_4955_b80b_87bb98b3c946.slice/crio-e6568db5dd830c9272edebc2f57e2f0aa66edbc0f8787a70bc83aef3ba69aa3b WatchSource:0}: Error finding container e6568db5dd830c9272edebc2f57e2f0aa66edbc0f8787a70bc83aef3ba69aa3b: Status 404 returned error can't find the container with id e6568db5dd830c9272edebc2f57e2f0aa66edbc0f8787a70bc83aef3ba69aa3b Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.653984 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" event={"ID":"fda4902f-9bcc-419f-80f2-40a46dc2e7dd","Type":"ContainerStarted","Data":"429764266b2bd1d58bd9b9ea4acf91f4097f0db7d9d5b22551830c8d20941445"} Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.657117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" event={"ID":"4605847c-947f-4955-b80b-87bb98b3c946","Type":"ContainerStarted","Data":"e6568db5dd830c9272edebc2f57e2f0aa66edbc0f8787a70bc83aef3ba69aa3b"} Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.661780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77b75945b8-2br6j" event={"ID":"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d","Type":"ContainerStarted","Data":"eb6d15c1e5a6559bae11649fcabf5fb1b23e73591fcd1ce912b6b7211ceb6f3b"} Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.661946 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77b75945b8-2br6j" event={"ID":"c244cf55-bc9a-4cd2-a5fe-ef11db8bc83d","Type":"ContainerStarted","Data":"282d9b888e37404486ae78fff6015911ffaf8ffcf424b42d8e730a60dc63fbd8"} Oct 06 11:58:04 crc kubenswrapper[4958]: I1006 11:58:04.681139 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77b75945b8-2br6j" podStartSLOduration=1.681104248 podStartE2EDuration="1.681104248s" podCreationTimestamp="2025-10-06 11:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:58:04.68021471 +0000 UTC m=+638.566240038" watchObservedRunningTime="2025-10-06 11:58:04.681104248 +0000 UTC m=+638.567129596" Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.682285 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" event={"ID":"fda4902f-9bcc-419f-80f2-40a46dc2e7dd","Type":"ContainerStarted","Data":"15033ae7b761e4715297e919165accd70258978f9a63ddbfeee881f2d8c11966"} Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.684089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6vlgl" 
event={"ID":"89638e7b-fd22-4b85-8ee9-7eb5353f06c0","Type":"ContainerStarted","Data":"e59a1de423503abcc5a3d784b5766e2188bff5289fb10cf8680cef40033ec8b3"} Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.684323 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.685471 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" event={"ID":"5f482352-713c-4502-aded-dfe37c5fa8bc","Type":"ContainerStarted","Data":"e9140619a764d27e6769250c4a9a122387336e99e0ee9c3e2fe73526a76e23e1"} Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.686977 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" event={"ID":"4605847c-947f-4955-b80b-87bb98b3c946","Type":"ContainerStarted","Data":"9dc5897923620468abb1402d4866358f3683d16867262695b49dfa37a2e87a31"} Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.687187 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.714755 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-6vlgl" podStartSLOduration=2.115835183 podStartE2EDuration="5.71469842s" podCreationTimestamp="2025-10-06 11:58:02 +0000 UTC" firstStartedPulling="2025-10-06 11:58:03.303727748 +0000 UTC m=+637.189753056" lastFinishedPulling="2025-10-06 11:58:06.902590975 +0000 UTC m=+640.788616293" observedRunningTime="2025-10-06 11:58:07.700130624 +0000 UTC m=+641.586155942" watchObservedRunningTime="2025-10-06 11:58:07.71469842 +0000 UTC m=+641.600723748" Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.728185 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" 
podStartSLOduration=2.935644389 podStartE2EDuration="5.728166232s" podCreationTimestamp="2025-10-06 11:58:02 +0000 UTC" firstStartedPulling="2025-10-06 11:58:04.103794326 +0000 UTC m=+637.989819634" lastFinishedPulling="2025-10-06 11:58:06.896316159 +0000 UTC m=+640.782341477" observedRunningTime="2025-10-06 11:58:07.723510316 +0000 UTC m=+641.609535644" watchObservedRunningTime="2025-10-06 11:58:07.728166232 +0000 UTC m=+641.614191540" Oct 06 11:58:07 crc kubenswrapper[4958]: I1006 11:58:07.742867 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-rd5gn" podStartSLOduration=1.402596065 podStartE2EDuration="4.742848011s" podCreationTimestamp="2025-10-06 11:58:03 +0000 UTC" firstStartedPulling="2025-10-06 11:58:03.562241906 +0000 UTC m=+637.448267204" lastFinishedPulling="2025-10-06 11:58:06.902493852 +0000 UTC m=+640.788519150" observedRunningTime="2025-10-06 11:58:07.738447994 +0000 UTC m=+641.624473312" watchObservedRunningTime="2025-10-06 11:58:07.742848011 +0000 UTC m=+641.628873319" Oct 06 11:58:10 crc kubenswrapper[4958]: I1006 11:58:10.713015 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" event={"ID":"fda4902f-9bcc-419f-80f2-40a46dc2e7dd","Type":"ContainerStarted","Data":"41f304ca33891d024c6e2f288b9a0e03edf081885f923fc043e014e095d0537e"} Oct 06 11:58:10 crc kubenswrapper[4958]: I1006 11:58:10.739639 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-cmgch" podStartSLOduration=2.113408189 podStartE2EDuration="8.739616211s" podCreationTimestamp="2025-10-06 11:58:02 +0000 UTC" firstStartedPulling="2025-10-06 11:58:03.676253417 +0000 UTC m=+637.562278725" lastFinishedPulling="2025-10-06 11:58:10.302461439 +0000 UTC m=+644.188486747" observedRunningTime="2025-10-06 11:58:10.735717039 +0000 UTC m=+644.621742437" watchObservedRunningTime="2025-10-06 
11:58:10.739616211 +0000 UTC m=+644.625641559" Oct 06 11:58:13 crc kubenswrapper[4958]: I1006 11:58:13.320845 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-6vlgl" Oct 06 11:58:13 crc kubenswrapper[4958]: I1006 11:58:13.612276 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:13 crc kubenswrapper[4958]: I1006 11:58:13.612380 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:13 crc kubenswrapper[4958]: I1006 11:58:13.620834 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:13 crc kubenswrapper[4958]: I1006 11:58:13.741862 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77b75945b8-2br6j" Oct 06 11:58:13 crc kubenswrapper[4958]: I1006 11:58:13.816857 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ccpgf"] Oct 06 11:58:23 crc kubenswrapper[4958]: I1006 11:58:23.869492 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-82rnp" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.322915 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t"] Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.325989 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.328127 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.336488 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t"] Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.432484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.432576 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.432669 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngww\" (UniqueName: \"kubernetes.io/projected/ea75ba71-b6d0-4620-a412-26fb313a0bff-kube-api-access-kngww\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: 
I1006 11:58:38.533300 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.533360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.533415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kngww\" (UniqueName: \"kubernetes.io/projected/ea75ba71-b6d0-4620-a412-26fb313a0bff-kube-api-access-kngww\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.534102 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.534197 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.561548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kngww\" (UniqueName: \"kubernetes.io/projected/ea75ba71-b6d0-4620-a412-26fb313a0bff-kube-api-access-kngww\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.642443 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:38 crc kubenswrapper[4958]: I1006 11:58:38.876577 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ccpgf" podUID="799bd962-f454-498a-88e6-58793b08d732" containerName="console" containerID="cri-o://c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71" gracePeriod=15 Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.140988 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t"] Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.283616 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ccpgf_799bd962-f454-498a-88e6-58793b08d732/console/0.log" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.284089 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.344976 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-trusted-ca-bundle\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.345161 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/799bd962-f454-498a-88e6-58793b08d732-kube-api-access-4sdjd\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.345287 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-oauth-config\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.345321 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-serving-cert\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.345391 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-oauth-serving-cert\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.345436 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-console-config\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.345497 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-service-ca\") pod \"799bd962-f454-498a-88e6-58793b08d732\" (UID: \"799bd962-f454-498a-88e6-58793b08d732\") " Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.346332 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.346371 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-console-config" (OuterVolumeSpecName: "console-config") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.346633 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.346730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-service-ca" (OuterVolumeSpecName: "service-ca") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.352566 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.352807 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799bd962-f454-498a-88e6-58793b08d732-kube-api-access-4sdjd" (OuterVolumeSpecName: "kube-api-access-4sdjd") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "kube-api-access-4sdjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.354662 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "799bd962-f454-498a-88e6-58793b08d732" (UID: "799bd962-f454-498a-88e6-58793b08d732"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447321 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/799bd962-f454-498a-88e6-58793b08d732-kube-api-access-4sdjd\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447374 4958 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447394 4958 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/799bd962-f454-498a-88e6-58793b08d732-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447412 4958 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447428 4958 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447447 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.447464 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799bd962-f454-498a-88e6-58793b08d732-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:39 crc 
kubenswrapper[4958]: I1006 11:58:39.926390 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerID="959ce7f735bbfcf9adf7fa1da4306fa9321db1537af0a7efa9d555a4f0eb1006" exitCode=0 Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.926532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" event={"ID":"ea75ba71-b6d0-4620-a412-26fb313a0bff","Type":"ContainerDied","Data":"959ce7f735bbfcf9adf7fa1da4306fa9321db1537af0a7efa9d555a4f0eb1006"} Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.926615 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" event={"ID":"ea75ba71-b6d0-4620-a412-26fb313a0bff","Type":"ContainerStarted","Data":"636781ac9b6947fa228e426659f0148248c7500a10f16e086d0388c08bd6b856"} Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.931048 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ccpgf_799bd962-f454-498a-88e6-58793b08d732/console/0.log" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.931101 4958 generic.go:334] "Generic (PLEG): container finished" podID="799bd962-f454-498a-88e6-58793b08d732" containerID="c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71" exitCode=2 Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.931140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ccpgf" event={"ID":"799bd962-f454-498a-88e6-58793b08d732","Type":"ContainerDied","Data":"c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71"} Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.931203 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ccpgf" 
event={"ID":"799bd962-f454-498a-88e6-58793b08d732","Type":"ContainerDied","Data":"3cde3f926bcccfd98d212a28aff8f431f9e977c5333b923bbc8451e9e51fc56f"} Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.931230 4958 scope.go:117] "RemoveContainer" containerID="c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.931259 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ccpgf" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.977204 4958 scope.go:117] "RemoveContainer" containerID="c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71" Oct 06 11:58:39 crc kubenswrapper[4958]: E1006 11:58:39.978051 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71\": container with ID starting with c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71 not found: ID does not exist" containerID="c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71" Oct 06 11:58:39 crc kubenswrapper[4958]: I1006 11:58:39.978124 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71"} err="failed to get container status \"c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71\": rpc error: code = NotFound desc = could not find container \"c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71\": container with ID starting with c40c27e16230f41ae4aea9fd481e99813ed2d4e52340ea5e3497662b06da0e71 not found: ID does not exist" Oct 06 11:58:40 crc kubenswrapper[4958]: I1006 11:58:40.000308 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ccpgf"] Oct 06 11:58:40 crc kubenswrapper[4958]: I1006 11:58:40.004349 4958 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ccpgf"] Oct 06 11:58:40 crc kubenswrapper[4958]: I1006 11:58:40.926241 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799bd962-f454-498a-88e6-58793b08d732" path="/var/lib/kubelet/pods/799bd962-f454-498a-88e6-58793b08d732/volumes" Oct 06 11:58:42 crc kubenswrapper[4958]: I1006 11:58:42.954922 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerID="62083a5cdf5b653b98ed302648f6767670bf955917636279644c430c2ba4b788" exitCode=0 Oct 06 11:58:42 crc kubenswrapper[4958]: I1006 11:58:42.955008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" event={"ID":"ea75ba71-b6d0-4620-a412-26fb313a0bff","Type":"ContainerDied","Data":"62083a5cdf5b653b98ed302648f6767670bf955917636279644c430c2ba4b788"} Oct 06 11:58:43 crc kubenswrapper[4958]: I1006 11:58:43.966008 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerID="81117d7852097a0edaed96b7ac7fef43e95dcc6b1642323b3a912f168c03fb9c" exitCode=0 Oct 06 11:58:43 crc kubenswrapper[4958]: I1006 11:58:43.966235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" event={"ID":"ea75ba71-b6d0-4620-a412-26fb313a0bff","Type":"ContainerDied","Data":"81117d7852097a0edaed96b7ac7fef43e95dcc6b1642323b3a912f168c03fb9c"} Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.259972 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.342095 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kngww\" (UniqueName: \"kubernetes.io/projected/ea75ba71-b6d0-4620-a412-26fb313a0bff-kube-api-access-kngww\") pod \"ea75ba71-b6d0-4620-a412-26fb313a0bff\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.342334 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-util\") pod \"ea75ba71-b6d0-4620-a412-26fb313a0bff\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.342461 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-bundle\") pod \"ea75ba71-b6d0-4620-a412-26fb313a0bff\" (UID: \"ea75ba71-b6d0-4620-a412-26fb313a0bff\") " Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.343931 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-bundle" (OuterVolumeSpecName: "bundle") pod "ea75ba71-b6d0-4620-a412-26fb313a0bff" (UID: "ea75ba71-b6d0-4620-a412-26fb313a0bff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.350629 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea75ba71-b6d0-4620-a412-26fb313a0bff-kube-api-access-kngww" (OuterVolumeSpecName: "kube-api-access-kngww") pod "ea75ba71-b6d0-4620-a412-26fb313a0bff" (UID: "ea75ba71-b6d0-4620-a412-26fb313a0bff"). InnerVolumeSpecName "kube-api-access-kngww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.358368 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-util" (OuterVolumeSpecName: "util") pod "ea75ba71-b6d0-4620-a412-26fb313a0bff" (UID: "ea75ba71-b6d0-4620-a412-26fb313a0bff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.444820 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-util\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.445125 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea75ba71-b6d0-4620-a412-26fb313a0bff-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.445290 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kngww\" (UniqueName: \"kubernetes.io/projected/ea75ba71-b6d0-4620-a412-26fb313a0bff-kube-api-access-kngww\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.986854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" event={"ID":"ea75ba71-b6d0-4620-a412-26fb313a0bff","Type":"ContainerDied","Data":"636781ac9b6947fa228e426659f0148248c7500a10f16e086d0388c08bd6b856"} Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.986912 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636781ac9b6947fa228e426659f0148248c7500a10f16e086d0388c08bd6b856" Oct 06 11:58:45 crc kubenswrapper[4958]: I1006 11:58:45.986987 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.615975 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"] Oct 06 11:58:53 crc kubenswrapper[4958]: E1006 11:58:53.616740 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="pull" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.616756 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="pull" Oct 06 11:58:53 crc kubenswrapper[4958]: E1006 11:58:53.616770 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="extract" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.616778 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="extract" Oct 06 11:58:53 crc kubenswrapper[4958]: E1006 11:58:53.616792 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="util" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.616800 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="util" Oct 06 11:58:53 crc kubenswrapper[4958]: E1006 11:58:53.616815 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799bd962-f454-498a-88e6-58793b08d732" containerName="console" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.616823 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="799bd962-f454-498a-88e6-58793b08d732" containerName="console" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.616948 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="799bd962-f454-498a-88e6-58793b08d732" 
containerName="console" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.616960 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea75ba71-b6d0-4620-a412-26fb313a0bff" containerName="extract" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.617439 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.619700 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.619830 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.621222 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-97zm7" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.621246 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.621608 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.632371 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"] Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.671851 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf4a240a-9885-4f19-aea0-799fe1715bb3-webhook-cert\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc" Oct 06 
11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.671966 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf4a240a-9885-4f19-aea0-799fe1715bb3-apiservice-cert\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.671995 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdl48\" (UniqueName: \"kubernetes.io/projected/cf4a240a-9885-4f19-aea0-799fe1715bb3-kube-api-access-zdl48\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.773305 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf4a240a-9885-4f19-aea0-799fe1715bb3-apiservice-cert\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.773347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdl48\" (UniqueName: \"kubernetes.io/projected/cf4a240a-9885-4f19-aea0-799fe1715bb3-kube-api-access-zdl48\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.773410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf4a240a-9885-4f19-aea0-799fe1715bb3-webhook-cert\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.778807 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf4a240a-9885-4f19-aea0-799fe1715bb3-apiservice-cert\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.787698 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdl48\" (UniqueName: \"kubernetes.io/projected/cf4a240a-9885-4f19-aea0-799fe1715bb3-kube-api-access-zdl48\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.794641 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf4a240a-9885-4f19-aea0-799fe1715bb3-webhook-cert\") pod \"metallb-operator-controller-manager-7bff9bd6d4-tzpwc\" (UID: \"cf4a240a-9885-4f19-aea0-799fe1715bb3\") " pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.851263 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-698665c988-htg28"]
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.852093 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.858701 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.858990 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-d9r4b"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.860627 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.869431 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-698665c988-htg28"]
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.975426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-apiservice-cert\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.975516 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkl8g\" (UniqueName: \"kubernetes.io/projected/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-kube-api-access-fkl8g\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.975644 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-webhook-cert\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:53 crc kubenswrapper[4958]: I1006 11:58:53.989256 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.078585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-apiservice-cert\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.078841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkl8g\" (UniqueName: \"kubernetes.io/projected/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-kube-api-access-fkl8g\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.078918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-webhook-cert\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.095024 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-webhook-cert\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.094590 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-apiservice-cert\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.104075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkl8g\" (UniqueName: \"kubernetes.io/projected/b9d4c539-c6bf-4300-a2cc-9647dbb9fe53-kube-api-access-fkl8g\") pod \"metallb-operator-webhook-server-698665c988-htg28\" (UID: \"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53\") " pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.163977 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.385941 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"]
Oct 06 11:58:54 crc kubenswrapper[4958]: W1006 11:58:54.393852 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4a240a_9885_4f19_aea0_799fe1715bb3.slice/crio-e78c0766459ebfc258d490352bd13c2dcff078fdb671fb8f13fa099475cdd5b7 WatchSource:0}: Error finding container e78c0766459ebfc258d490352bd13c2dcff078fdb671fb8f13fa099475cdd5b7: Status 404 returned error can't find the container with id e78c0766459ebfc258d490352bd13c2dcff078fdb671fb8f13fa099475cdd5b7
Oct 06 11:58:54 crc kubenswrapper[4958]: I1006 11:58:54.405710 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-698665c988-htg28"]
Oct 06 11:58:54 crc kubenswrapper[4958]: W1006 11:58:54.411205 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9d4c539_c6bf_4300_a2cc_9647dbb9fe53.slice/crio-5d0bd49d53c254bd6d729895f5f3352eaf47b4a3103f8187be8931e9d04bdeb7 WatchSource:0}: Error finding container 5d0bd49d53c254bd6d729895f5f3352eaf47b4a3103f8187be8931e9d04bdeb7: Status 404 returned error can't find the container with id 5d0bd49d53c254bd6d729895f5f3352eaf47b4a3103f8187be8931e9d04bdeb7
Oct 06 11:58:55 crc kubenswrapper[4958]: I1006 11:58:55.054512 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28" event={"ID":"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53","Type":"ContainerStarted","Data":"5d0bd49d53c254bd6d729895f5f3352eaf47b4a3103f8187be8931e9d04bdeb7"}
Oct 06 11:58:55 crc kubenswrapper[4958]: I1006 11:58:55.057072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc" event={"ID":"cf4a240a-9885-4f19-aea0-799fe1715bb3","Type":"ContainerStarted","Data":"e78c0766459ebfc258d490352bd13c2dcff078fdb671fb8f13fa099475cdd5b7"}
Oct 06 11:59:00 crc kubenswrapper[4958]: I1006 11:59:00.093622 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28" event={"ID":"b9d4c539-c6bf-4300-a2cc-9647dbb9fe53","Type":"ContainerStarted","Data":"46b59dca57f6726e0f25ff5a2aadbd00274a7a1659658f3a3999b9b4d98f6217"}
Oct 06 11:59:00 crc kubenswrapper[4958]: I1006 11:59:00.094184 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:59:00 crc kubenswrapper[4958]: I1006 11:59:00.096137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc" event={"ID":"cf4a240a-9885-4f19-aea0-799fe1715bb3","Type":"ContainerStarted","Data":"45e7b6692cc46ee4ea277076b4a768fc035a6c81b3a26cab35039affb916998d"}
Oct 06 11:59:00 crc kubenswrapper[4958]: I1006 11:59:00.096287 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:59:00 crc kubenswrapper[4958]: I1006 11:59:00.114048 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28" podStartSLOduration=2.403646009 podStartE2EDuration="7.114030208s" podCreationTimestamp="2025-10-06 11:58:53 +0000 UTC" firstStartedPulling="2025-10-06 11:58:54.414572541 +0000 UTC m=+688.300597849" lastFinishedPulling="2025-10-06 11:58:59.12495674 +0000 UTC m=+693.010982048" observedRunningTime="2025-10-06 11:59:00.109408573 +0000 UTC m=+693.995433881" watchObservedRunningTime="2025-10-06 11:59:00.114030208 +0000 UTC m=+694.000055506"
Oct 06 11:59:00 crc kubenswrapper[4958]: I1006 11:59:00.129400 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc" podStartSLOduration=2.416479131 podStartE2EDuration="7.129385259s" podCreationTimestamp="2025-10-06 11:58:53 +0000 UTC" firstStartedPulling="2025-10-06 11:58:54.396316129 +0000 UTC m=+688.282341437" lastFinishedPulling="2025-10-06 11:58:59.109222227 +0000 UTC m=+692.995247565" observedRunningTime="2025-10-06 11:59:00.126745266 +0000 UTC m=+694.012770574" watchObservedRunningTime="2025-10-06 11:59:00.129385259 +0000 UTC m=+694.015410567"
Oct 06 11:59:14 crc kubenswrapper[4958]: I1006 11:59:14.175862 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-698665c988-htg28"
Oct 06 11:59:33 crc kubenswrapper[4958]: I1006 11:59:33.993792 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7bff9bd6d4-tzpwc"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.959207 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"]
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.960441 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.964450 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cpghv"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.968835 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mllk7"]
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.971859 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.974573 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.976657 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.985198 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Oct 06 11:59:34 crc kubenswrapper[4958]: I1006 11:59:34.989811 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"]
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.048623 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-tz4r7"]
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.050100 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.052095 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.057997 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058034 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cab249a-2dc5-4211-a567-b55c234a8853-metrics-certs\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058059 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zgq\" (UniqueName: \"kubernetes.io/projected/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-kube-api-access-82zgq\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058075 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jvx\" (UniqueName: \"kubernetes.io/projected/2cab249a-2dc5-4211-a567-b55c234a8853-kube-api-access-n6jvx\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-reloader\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058122 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2cab249a-2dc5-4211-a567-b55c234a8853-frr-startup\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058160 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-metrics\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058189 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-frr-sockets\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.058205 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-frr-conf\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.069298 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qpvkk"]
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.070315 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.072332 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-tz4r7"]
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.073468 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.073710 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.073818 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.073927 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vqz7k"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.158786 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g29wm\" (UniqueName: \"kubernetes.io/projected/7d0d517b-7a87-4cd1-9039-998c3765332f-kube-api-access-g29wm\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.158831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-reloader\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.158854 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-metrics-certs\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.158878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.158906 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2cab249a-2dc5-4211-a567-b55c234a8853-frr-startup\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.158932 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-metrics\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159000 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-cert\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159020 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rrj\" (UniqueName: \"kubernetes.io/projected/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-kube-api-access-p2rrj\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159040 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-frr-sockets\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159071 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-frr-conf\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-metrics-certs\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7d0d517b-7a87-4cd1-9039-998c3765332f-metallb-excludel2\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cab249a-2dc5-4211-a567-b55c234a8853-metrics-certs\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159541 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-frr-sockets\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159694 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-reloader\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: E1006 11:59:35.159732 4958 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159453 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-metrics\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.159492 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2cab249a-2dc5-4211-a567-b55c234a8853-frr-conf\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: E1006 11:59:35.159900 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-cert podName:f4ed48dc-17a8-4241-9c4e-8febcebb2c45 nodeName:}" failed. No retries permitted until 2025-10-06 11:59:35.659883896 +0000 UTC m=+729.545909204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-cert") pod "frr-k8s-webhook-server-64bf5d555-vf4w8" (UID: "f4ed48dc-17a8-4241-9c4e-8febcebb2c45") : secret "frr-k8s-webhook-server-cert" not found
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.160025 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jvx\" (UniqueName: \"kubernetes.io/projected/2cab249a-2dc5-4211-a567-b55c234a8853-kube-api-access-n6jvx\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.160056 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2cab249a-2dc5-4211-a567-b55c234a8853-frr-startup\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.160067 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82zgq\" (UniqueName: \"kubernetes.io/projected/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-kube-api-access-82zgq\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.165528 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cab249a-2dc5-4211-a567-b55c234a8853-metrics-certs\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.184247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82zgq\" (UniqueName: \"kubernetes.io/projected/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-kube-api-access-82zgq\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.191599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jvx\" (UniqueName: \"kubernetes.io/projected/2cab249a-2dc5-4211-a567-b55c234a8853-kube-api-access-n6jvx\") pod \"frr-k8s-mllk7\" (UID: \"2cab249a-2dc5-4211-a567-b55c234a8853\") " pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261189 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-metrics-certs\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261261 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7d0d517b-7a87-4cd1-9039-998c3765332f-metallb-excludel2\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261300 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g29wm\" (UniqueName: \"kubernetes.io/projected/7d0d517b-7a87-4cd1-9039-998c3765332f-kube-api-access-g29wm\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261323 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-metrics-certs\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261401 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-cert\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.261423 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rrj\" (UniqueName: \"kubernetes.io/projected/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-kube-api-access-p2rrj\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: E1006 11:59:35.261803 4958 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 06 11:59:35 crc kubenswrapper[4958]: E1006 11:59:35.261857 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist podName:7d0d517b-7a87-4cd1-9039-998c3765332f nodeName:}" failed. No retries permitted until 2025-10-06 11:59:35.761839713 +0000 UTC m=+729.647865021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist") pod "speaker-qpvkk" (UID: "7d0d517b-7a87-4cd1-9039-998c3765332f") : secret "metallb-memberlist" not found
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.262622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7d0d517b-7a87-4cd1-9039-998c3765332f-metallb-excludel2\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.263847 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.265405 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-metrics-certs\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.270872 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-metrics-certs\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.276631 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-cert\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.280724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rrj\" (UniqueName: \"kubernetes.io/projected/38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5-kube-api-access-p2rrj\") pod \"controller-68d546b9d8-tz4r7\" (UID: \"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5\") " pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.281791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g29wm\" (UniqueName: \"kubernetes.io/projected/7d0d517b-7a87-4cd1-9039-998c3765332f-kube-api-access-g29wm\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.303673 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mllk7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.367761 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-tz4r7"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.604551 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-tz4r7"]
Oct 06 11:59:35 crc kubenswrapper[4958]: W1006 11:59:35.611234 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38cbcdd4_5fc7_4c9f_9adc_d2b4c8362ce5.slice/crio-da7139ff90582d62f03d0b1248ada13f1a9e5227ee96c63dae96c16d77af55c0 WatchSource:0}: Error finding container da7139ff90582d62f03d0b1248ada13f1a9e5227ee96c63dae96c16d77af55c0: Status 404 returned error can't find the container with id da7139ff90582d62f03d0b1248ada13f1a9e5227ee96c63dae96c16d77af55c0
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.666698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.675213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4ed48dc-17a8-4241-9c4e-8febcebb2c45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vf4w8\" (UID: \"f4ed48dc-17a8-4241-9c4e-8febcebb2c45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.768609 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk"
Oct 06 11:59:35 crc kubenswrapper[4958]: E1006 11:59:35.768778 4958 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 06 11:59:35 crc kubenswrapper[4958]: E1006 11:59:35.768860 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist podName:7d0d517b-7a87-4cd1-9039-998c3765332f nodeName:}" failed. No retries permitted until 2025-10-06 11:59:36.768840169 +0000 UTC m=+730.654865487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist") pod "speaker-qpvkk" (UID: "7d0d517b-7a87-4cd1-9039-998c3765332f") : secret "metallb-memberlist" not found
Oct 06 11:59:35 crc kubenswrapper[4958]: I1006 11:59:35.891915 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8" Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.164712 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8"] Oct 06 11:59:36 crc kubenswrapper[4958]: W1006 11:59:36.170562 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4ed48dc_17a8_4241_9c4e_8febcebb2c45.slice/crio-a433e6f0904e8863e8cb9f0d8bff9350901d41b0c305dfbd12f78608c035af7b WatchSource:0}: Error finding container a433e6f0904e8863e8cb9f0d8bff9350901d41b0c305dfbd12f78608c035af7b: Status 404 returned error can't find the container with id a433e6f0904e8863e8cb9f0d8bff9350901d41b0c305dfbd12f78608c035af7b Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.343084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"6bd5a0aef3f5884703990680a194ddb63c3a791274333cdb20e9b9d27fba9e0f"} Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.344537 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8" event={"ID":"f4ed48dc-17a8-4241-9c4e-8febcebb2c45","Type":"ContainerStarted","Data":"a433e6f0904e8863e8cb9f0d8bff9350901d41b0c305dfbd12f78608c035af7b"} Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.346571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-tz4r7" event={"ID":"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5","Type":"ContainerStarted","Data":"f327c880cf3de3b767b4899d5031da57891e25b6ea0ff569125284acc29ad65d"} Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.346601 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-tz4r7" 
event={"ID":"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5","Type":"ContainerStarted","Data":"73c609eeb8ce2750e6df838bc673b56b3beea3927f0a432bf48ef2f4771199ad"} Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.346614 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-tz4r7" event={"ID":"38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5","Type":"ContainerStarted","Data":"da7139ff90582d62f03d0b1248ada13f1a9e5227ee96c63dae96c16d77af55c0"} Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.346769 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-tz4r7" Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.369477 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-tz4r7" podStartSLOduration=1.369460356 podStartE2EDuration="1.369460356s" podCreationTimestamp="2025-10-06 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:59:36.36568699 +0000 UTC m=+730.251712308" watchObservedRunningTime="2025-10-06 11:59:36.369460356 +0000 UTC m=+730.255485674" Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.783634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk" Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.791044 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d0d517b-7a87-4cd1-9039-998c3765332f-memberlist\") pod \"speaker-qpvkk\" (UID: \"7d0d517b-7a87-4cd1-9039-998c3765332f\") " pod="metallb-system/speaker-qpvkk" Oct 06 11:59:36 crc kubenswrapper[4958]: I1006 11:59:36.889036 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qpvkk" Oct 06 11:59:36 crc kubenswrapper[4958]: W1006 11:59:36.922232 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d0d517b_7a87_4cd1_9039_998c3765332f.slice/crio-cd06ae106ae75f605ffaa62474c65b79d1a676abe73ebd54f60e571af9dfe170 WatchSource:0}: Error finding container cd06ae106ae75f605ffaa62474c65b79d1a676abe73ebd54f60e571af9dfe170: Status 404 returned error can't find the container with id cd06ae106ae75f605ffaa62474c65b79d1a676abe73ebd54f60e571af9dfe170 Oct 06 11:59:37 crc kubenswrapper[4958]: I1006 11:59:37.353203 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpvkk" event={"ID":"7d0d517b-7a87-4cd1-9039-998c3765332f","Type":"ContainerStarted","Data":"63acb497b1d906918bf36876e0fa2af6bfa2718727cf10117d1a8f32eb0cf8c9"} Oct 06 11:59:37 crc kubenswrapper[4958]: I1006 11:59:37.353525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpvkk" event={"ID":"7d0d517b-7a87-4cd1-9039-998c3765332f","Type":"ContainerStarted","Data":"cd06ae106ae75f605ffaa62474c65b79d1a676abe73ebd54f60e571af9dfe170"} Oct 06 11:59:38 crc kubenswrapper[4958]: I1006 11:59:38.375307 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpvkk" event={"ID":"7d0d517b-7a87-4cd1-9039-998c3765332f","Type":"ContainerStarted","Data":"3b86fa9a25eb111bdc12fb1f91f105dbf2ddc56ba8a2c734b035dd1115523f47"} Oct 06 11:59:38 crc kubenswrapper[4958]: I1006 11:59:38.376089 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qpvkk" Oct 06 11:59:38 crc kubenswrapper[4958]: I1006 11:59:38.391858 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qpvkk" podStartSLOduration=3.391837171 podStartE2EDuration="3.391837171s" podCreationTimestamp="2025-10-06 
11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:59:38.389828346 +0000 UTC m=+732.275853654" watchObservedRunningTime="2025-10-06 11:59:38.391837171 +0000 UTC m=+732.277862479" Oct 06 11:59:43 crc kubenswrapper[4958]: I1006 11:59:43.413329 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8" event={"ID":"f4ed48dc-17a8-4241-9c4e-8febcebb2c45","Type":"ContainerStarted","Data":"0f7fced528249bc790ae0542dc8de9c3125e9e8c27e943b84d6f42c27a253f7a"} Oct 06 11:59:43 crc kubenswrapper[4958]: I1006 11:59:43.413952 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8" Oct 06 11:59:43 crc kubenswrapper[4958]: I1006 11:59:43.416119 4958 generic.go:334] "Generic (PLEG): container finished" podID="2cab249a-2dc5-4211-a567-b55c234a8853" containerID="12cb6008703ea5091c7d438694f46583f66766a8212a37581a3321c78860a2ab" exitCode=0 Oct 06 11:59:43 crc kubenswrapper[4958]: I1006 11:59:43.416223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerDied","Data":"12cb6008703ea5091c7d438694f46583f66766a8212a37581a3321c78860a2ab"} Oct 06 11:59:43 crc kubenswrapper[4958]: I1006 11:59:43.436139 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8" podStartSLOduration=2.787149398 podStartE2EDuration="9.43611564s" podCreationTimestamp="2025-10-06 11:59:34 +0000 UTC" firstStartedPulling="2025-10-06 11:59:36.172367992 +0000 UTC m=+730.058393300" lastFinishedPulling="2025-10-06 11:59:42.821334204 +0000 UTC m=+736.707359542" observedRunningTime="2025-10-06 11:59:43.433670627 +0000 UTC m=+737.319695965" watchObservedRunningTime="2025-10-06 11:59:43.43611564 +0000 UTC 
m=+737.322140988" Oct 06 11:59:44 crc kubenswrapper[4958]: I1006 11:59:44.432360 4958 generic.go:334] "Generic (PLEG): container finished" podID="2cab249a-2dc5-4211-a567-b55c234a8853" containerID="e663e8f9756c1d87196bd110d4a84d7b50d224cc86fad2eac6a6069326a726b1" exitCode=0 Oct 06 11:59:44 crc kubenswrapper[4958]: I1006 11:59:44.433188 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerDied","Data":"e663e8f9756c1d87196bd110d4a84d7b50d224cc86fad2eac6a6069326a726b1"} Oct 06 11:59:45 crc kubenswrapper[4958]: I1006 11:59:45.376550 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-tz4r7" Oct 06 11:59:45 crc kubenswrapper[4958]: I1006 11:59:45.444618 4958 generic.go:334] "Generic (PLEG): container finished" podID="2cab249a-2dc5-4211-a567-b55c234a8853" containerID="ec1daee313da394ba5d2b140f1b664a9f48d4051e71a31dfd66d37003013dd75" exitCode=0 Oct 06 11:59:45 crc kubenswrapper[4958]: I1006 11:59:45.444869 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerDied","Data":"ec1daee313da394ba5d2b140f1b664a9f48d4051e71a31dfd66d37003013dd75"} Oct 06 11:59:46 crc kubenswrapper[4958]: I1006 11:59:46.460167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"14b869ce864ff8181e767972a93e1d9752442ce69c176d50d460ef345df696dc"} Oct 06 11:59:46 crc kubenswrapper[4958]: I1006 11:59:46.460244 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"8fcf5a77aadc13a406bd3335e8c47d5e0e21495e705f632a663a5d6a9843d6fa"} Oct 06 11:59:46 crc kubenswrapper[4958]: I1006 11:59:46.460260 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"8be3301f6d8c819d8272ceb49b780e320051c8f5632f8abc70e129c823ffd9d3"} Oct 06 11:59:46 crc kubenswrapper[4958]: I1006 11:59:46.460271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"5f414d12578e5594a0e1a475b55607024ed99bbcb3e90517201fa0a3dd8d7be9"} Oct 06 11:59:46 crc kubenswrapper[4958]: I1006 11:59:46.460281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"5f39dde37346e7a3894e780a36b6c7ddc98a8e04b367aad46c38ff92134609f5"} Oct 06 11:59:47 crc kubenswrapper[4958]: I1006 11:59:47.480453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mllk7" event={"ID":"2cab249a-2dc5-4211-a567-b55c234a8853","Type":"ContainerStarted","Data":"396da6ed31bee921d581ae8f367383a5db7da9ef9e440b2ae6437f144a987979"} Oct 06 11:59:47 crc kubenswrapper[4958]: I1006 11:59:47.480848 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mllk7" Oct 06 11:59:47 crc kubenswrapper[4958]: I1006 11:59:47.517458 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mllk7" podStartSLOduration=6.082424932 podStartE2EDuration="13.517433061s" podCreationTimestamp="2025-10-06 11:59:34 +0000 UTC" firstStartedPulling="2025-10-06 11:59:35.430395498 +0000 UTC m=+729.316420816" lastFinishedPulling="2025-10-06 11:59:42.865403617 +0000 UTC m=+736.751428945" observedRunningTime="2025-10-06 11:59:47.51627497 +0000 UTC m=+741.402300318" watchObservedRunningTime="2025-10-06 11:59:47.517433061 +0000 UTC m=+741.403458409" Oct 06 11:59:50 crc kubenswrapper[4958]: I1006 11:59:50.304346 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mllk7" Oct 06 11:59:50 crc kubenswrapper[4958]: I1006 11:59:50.373447 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mllk7" Oct 06 11:59:55 crc kubenswrapper[4958]: I1006 11:59:55.308456 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mllk7" Oct 06 11:59:55 crc kubenswrapper[4958]: I1006 11:59:55.901798 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vf4w8" Oct 06 11:59:56 crc kubenswrapper[4958]: I1006 11:59:56.894476 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qpvkk" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.160010 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd"] Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.162086 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.165716 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.166066 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.178672 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd"] Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.266463 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g9wxg"] Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.267780 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.270584 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.270943 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vkq86" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.272332 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.286791 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g9wxg"] Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.291649 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvvx\" (UniqueName: 
\"kubernetes.io/projected/3f003179-aa9d-491f-aefb-aaedbeaf375b-kube-api-access-jlvvx\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.291934 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f003179-aa9d-491f-aefb-aaedbeaf375b-config-volume\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.292044 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f003179-aa9d-491f-aefb-aaedbeaf375b-secret-volume\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.392948 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppt6\" (UniqueName: \"kubernetes.io/projected/7283fba9-72df-4561-9dbb-e495bf4f5fec-kube-api-access-xppt6\") pod \"openstack-operator-index-g9wxg\" (UID: \"7283fba9-72df-4561-9dbb-e495bf4f5fec\") " pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.393017 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f003179-aa9d-491f-aefb-aaedbeaf375b-secret-volume\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 
crc kubenswrapper[4958]: I1006 12:00:00.393094 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvvx\" (UniqueName: \"kubernetes.io/projected/3f003179-aa9d-491f-aefb-aaedbeaf375b-kube-api-access-jlvvx\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.393123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f003179-aa9d-491f-aefb-aaedbeaf375b-config-volume\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.394062 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f003179-aa9d-491f-aefb-aaedbeaf375b-config-volume\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.403415 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f003179-aa9d-491f-aefb-aaedbeaf375b-secret-volume\") pod \"collect-profiles-29329200-chprd\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.410115 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvvx\" (UniqueName: \"kubernetes.io/projected/3f003179-aa9d-491f-aefb-aaedbeaf375b-kube-api-access-jlvvx\") pod \"collect-profiles-29329200-chprd\" (UID: 
\"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.484115 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.494718 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppt6\" (UniqueName: \"kubernetes.io/projected/7283fba9-72df-4561-9dbb-e495bf4f5fec-kube-api-access-xppt6\") pod \"openstack-operator-index-g9wxg\" (UID: \"7283fba9-72df-4561-9dbb-e495bf4f5fec\") " pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.516699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppt6\" (UniqueName: \"kubernetes.io/projected/7283fba9-72df-4561-9dbb-e495bf4f5fec-kube-api-access-xppt6\") pod \"openstack-operator-index-g9wxg\" (UID: \"7283fba9-72df-4561-9dbb-e495bf4f5fec\") " pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.580551 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.727208 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd"] Oct 06 12:00:00 crc kubenswrapper[4958]: I1006 12:00:00.801633 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g9wxg"] Oct 06 12:00:00 crc kubenswrapper[4958]: W1006 12:00:00.807535 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7283fba9_72df_4561_9dbb_e495bf4f5fec.slice/crio-3f9d95aeec123c94bbd070046448d4b0f886bc439d1f582dbccbc568cd7c2ee4 WatchSource:0}: Error finding container 3f9d95aeec123c94bbd070046448d4b0f886bc439d1f582dbccbc568cd7c2ee4: Status 404 returned error can't find the container with id 3f9d95aeec123c94bbd070046448d4b0f886bc439d1f582dbccbc568cd7c2ee4 Oct 06 12:00:01 crc kubenswrapper[4958]: I1006 12:00:01.592475 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g9wxg" event={"ID":"7283fba9-72df-4561-9dbb-e495bf4f5fec","Type":"ContainerStarted","Data":"3f9d95aeec123c94bbd070046448d4b0f886bc439d1f582dbccbc568cd7c2ee4"} Oct 06 12:00:01 crc kubenswrapper[4958]: I1006 12:00:01.594283 4958 generic.go:334] "Generic (PLEG): container finished" podID="3f003179-aa9d-491f-aefb-aaedbeaf375b" containerID="022212a3d1184d485ef8da170af4a07c86e89e97b2c33c18c3fff9675515fcdc" exitCode=0 Oct 06 12:00:01 crc kubenswrapper[4958]: I1006 12:00:01.594332 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" event={"ID":"3f003179-aa9d-491f-aefb-aaedbeaf375b","Type":"ContainerDied","Data":"022212a3d1184d485ef8da170af4a07c86e89e97b2c33c18c3fff9675515fcdc"} Oct 06 12:00:01 crc kubenswrapper[4958]: I1006 12:00:01.594364 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" event={"ID":"3f003179-aa9d-491f-aefb-aaedbeaf375b","Type":"ContainerStarted","Data":"ee107b17bb59f378a7944b609b8b2dead24a203583dc82d1ce7ba00fe69d72ee"} Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.042071 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-g9wxg"] Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.134632 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.241426 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f003179-aa9d-491f-aefb-aaedbeaf375b-secret-volume\") pod \"3f003179-aa9d-491f-aefb-aaedbeaf375b\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.242446 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlvvx\" (UniqueName: \"kubernetes.io/projected/3f003179-aa9d-491f-aefb-aaedbeaf375b-kube-api-access-jlvvx\") pod \"3f003179-aa9d-491f-aefb-aaedbeaf375b\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.242813 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f003179-aa9d-491f-aefb-aaedbeaf375b-config-volume\") pod \"3f003179-aa9d-491f-aefb-aaedbeaf375b\" (UID: \"3f003179-aa9d-491f-aefb-aaedbeaf375b\") " Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.243238 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f003179-aa9d-491f-aefb-aaedbeaf375b-config-volume" (OuterVolumeSpecName: "config-volume") 
pod "3f003179-aa9d-491f-aefb-aaedbeaf375b" (UID: "3f003179-aa9d-491f-aefb-aaedbeaf375b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.245977 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f003179-aa9d-491f-aefb-aaedbeaf375b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f003179-aa9d-491f-aefb-aaedbeaf375b" (UID: "3f003179-aa9d-491f-aefb-aaedbeaf375b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.246352 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f003179-aa9d-491f-aefb-aaedbeaf375b-kube-api-access-jlvvx" (OuterVolumeSpecName: "kube-api-access-jlvvx") pod "3f003179-aa9d-491f-aefb-aaedbeaf375b" (UID: "3f003179-aa9d-491f-aefb-aaedbeaf375b"). InnerVolumeSpecName "kube-api-access-jlvvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.344518 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f003179-aa9d-491f-aefb-aaedbeaf375b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.344559 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f003179-aa9d-491f-aefb-aaedbeaf375b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.344573 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlvvx\" (UniqueName: \"kubernetes.io/projected/3f003179-aa9d-491f-aefb-aaedbeaf375b-kube-api-access-jlvvx\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.609259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g9wxg" event={"ID":"7283fba9-72df-4561-9dbb-e495bf4f5fec","Type":"ContainerStarted","Data":"5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1"} Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.609452 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-g9wxg" podUID="7283fba9-72df-4561-9dbb-e495bf4f5fec" containerName="registry-server" containerID="cri-o://5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1" gracePeriod=2 Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.613471 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" event={"ID":"3f003179-aa9d-491f-aefb-aaedbeaf375b","Type":"ContainerDied","Data":"ee107b17bb59f378a7944b609b8b2dead24a203583dc82d1ce7ba00fe69d72ee"} Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.613538 4958 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee107b17bb59f378a7944b609b8b2dead24a203583dc82d1ce7ba00fe69d72ee" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.613557 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.630363 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g9wxg" podStartSLOduration=1.172428253 podStartE2EDuration="3.630348332s" podCreationTimestamp="2025-10-06 12:00:00 +0000 UTC" firstStartedPulling="2025-10-06 12:00:00.809566334 +0000 UTC m=+754.695591642" lastFinishedPulling="2025-10-06 12:00:03.267486423 +0000 UTC m=+757.153511721" observedRunningTime="2025-10-06 12:00:03.62966587 +0000 UTC m=+757.515691208" watchObservedRunningTime="2025-10-06 12:00:03.630348332 +0000 UTC m=+757.516373640" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.652846 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wdw62"] Oct 06 12:00:03 crc kubenswrapper[4958]: E1006 12:00:03.653943 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f003179-aa9d-491f-aefb-aaedbeaf375b" containerName="collect-profiles" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.654301 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f003179-aa9d-491f-aefb-aaedbeaf375b" containerName="collect-profiles" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.654914 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f003179-aa9d-491f-aefb-aaedbeaf375b" containerName="collect-profiles" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.656251 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.663174 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wdw62"] Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.850118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flj9f\" (UniqueName: \"kubernetes.io/projected/e61771dc-7e62-4e42-a99a-c8eae920cb26-kube-api-access-flj9f\") pod \"openstack-operator-index-wdw62\" (UID: \"e61771dc-7e62-4e42-a99a-c8eae920cb26\") " pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.951503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flj9f\" (UniqueName: \"kubernetes.io/projected/e61771dc-7e62-4e42-a99a-c8eae920cb26-kube-api-access-flj9f\") pod \"openstack-operator-index-wdw62\" (UID: \"e61771dc-7e62-4e42-a99a-c8eae920cb26\") " pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.973258 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flj9f\" (UniqueName: \"kubernetes.io/projected/e61771dc-7e62-4e42-a99a-c8eae920cb26-kube-api-access-flj9f\") pod \"openstack-operator-index-wdw62\" (UID: \"e61771dc-7e62-4e42-a99a-c8eae920cb26\") " pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:03 crc kubenswrapper[4958]: I1006 12:00:03.987125 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.022204 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.053076 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xppt6\" (UniqueName: \"kubernetes.io/projected/7283fba9-72df-4561-9dbb-e495bf4f5fec-kube-api-access-xppt6\") pod \"7283fba9-72df-4561-9dbb-e495bf4f5fec\" (UID: \"7283fba9-72df-4561-9dbb-e495bf4f5fec\") " Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.056725 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7283fba9-72df-4561-9dbb-e495bf4f5fec-kube-api-access-xppt6" (OuterVolumeSpecName: "kube-api-access-xppt6") pod "7283fba9-72df-4561-9dbb-e495bf4f5fec" (UID: "7283fba9-72df-4561-9dbb-e495bf4f5fec"). InnerVolumeSpecName "kube-api-access-xppt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.155004 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xppt6\" (UniqueName: \"kubernetes.io/projected/7283fba9-72df-4561-9dbb-e495bf4f5fec-kube-api-access-xppt6\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.299554 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wdw62"] Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.620500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wdw62" event={"ID":"e61771dc-7e62-4e42-a99a-c8eae920cb26","Type":"ContainerStarted","Data":"25dbe325c54a1cac0ba991224bf8b1ae378528b3d5a91853062831919e63c82f"} Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.620582 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wdw62" 
event={"ID":"e61771dc-7e62-4e42-a99a-c8eae920cb26","Type":"ContainerStarted","Data":"8a24f32fd94ff1eee12029d89f4e9ae1963cbc66d4d0bb042bd63c3baddae4a2"} Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.622072 4958 generic.go:334] "Generic (PLEG): container finished" podID="7283fba9-72df-4561-9dbb-e495bf4f5fec" containerID="5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1" exitCode=0 Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.622152 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g9wxg" event={"ID":"7283fba9-72df-4561-9dbb-e495bf4f5fec","Type":"ContainerDied","Data":"5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1"} Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.622188 4958 scope.go:117] "RemoveContainer" containerID="5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.622232 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g9wxg" event={"ID":"7283fba9-72df-4561-9dbb-e495bf4f5fec","Type":"ContainerDied","Data":"3f9d95aeec123c94bbd070046448d4b0f886bc439d1f582dbccbc568cd7c2ee4"} Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.622125 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g9wxg" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.637114 4958 scope.go:117] "RemoveContainer" containerID="5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1" Oct 06 12:00:04 crc kubenswrapper[4958]: E1006 12:00:04.638006 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1\": container with ID starting with 5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1 not found: ID does not exist" containerID="5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.638044 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1"} err="failed to get container status \"5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1\": rpc error: code = NotFound desc = could not find container \"5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1\": container with ID starting with 5ba5c1b600b357165485e5dda273b7701e7dbcfc261ac9fd2453ba6b0977f0c1 not found: ID does not exist" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.641492 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wdw62" podStartSLOduration=1.592687368 podStartE2EDuration="1.641440463s" podCreationTimestamp="2025-10-06 12:00:03 +0000 UTC" firstStartedPulling="2025-10-06 12:00:04.314837009 +0000 UTC m=+758.200862317" lastFinishedPulling="2025-10-06 12:00:04.363590094 +0000 UTC m=+758.249615412" observedRunningTime="2025-10-06 12:00:04.637902661 +0000 UTC m=+758.523927989" watchObservedRunningTime="2025-10-06 12:00:04.641440463 +0000 UTC m=+758.527465771" Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 
12:00:04.660402 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-g9wxg"] Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.664050 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-g9wxg"] Oct 06 12:00:04 crc kubenswrapper[4958]: I1006 12:00:04.926342 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7283fba9-72df-4561-9dbb-e495bf4f5fec" path="/var/lib/kubelet/pods/7283fba9-72df-4561-9dbb-e495bf4f5fec/volumes" Oct 06 12:00:05 crc kubenswrapper[4958]: I1006 12:00:05.512439 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-thrt8"] Oct 06 12:00:05 crc kubenswrapper[4958]: I1006 12:00:05.512639 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" containerName="controller-manager" containerID="cri-o://cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60" gracePeriod=30 Oct 06 12:00:05 crc kubenswrapper[4958]: I1006 12:00:05.591072 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"] Oct 06 12:00:05 crc kubenswrapper[4958]: I1006 12:00:05.591327 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" podUID="5af269ee-1565-4c36-a416-c4e2e7397fc5" containerName="route-controller-manager" containerID="cri-o://ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73" gracePeriod=30 Oct 06 12:00:05 crc kubenswrapper[4958]: I1006 12:00:05.952092 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 12:00:05 crc kubenswrapper[4958]: I1006 12:00:05.981081 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.083304 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-config\") pod \"5af269ee-1565-4c36-a416-c4e2e7397fc5\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.083687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-proxy-ca-bundles\") pod \"cbf992a4-d7af-455c-953b-c865445feb6c\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.084031 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cskx9\" (UniqueName: \"kubernetes.io/projected/5af269ee-1565-4c36-a416-c4e2e7397fc5-kube-api-access-cskx9\") pod \"5af269ee-1565-4c36-a416-c4e2e7397fc5\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.084246 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf992a4-d7af-455c-953b-c865445feb6c-serving-cert\") pod \"cbf992a4-d7af-455c-953b-c865445feb6c\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.084401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-config" (OuterVolumeSpecName: "config") pod 
"5af269ee-1565-4c36-a416-c4e2e7397fc5" (UID: "5af269ee-1565-4c36-a416-c4e2e7397fc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.084430 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-config\") pod \"cbf992a4-d7af-455c-953b-c865445feb6c\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.084720 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-client-ca\") pod \"5af269ee-1565-4c36-a416-c4e2e7397fc5\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.084902 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5lk7\" (UniqueName: \"kubernetes.io/projected/cbf992a4-d7af-455c-953b-c865445feb6c-kube-api-access-s5lk7\") pod \"cbf992a4-d7af-455c-953b-c865445feb6c\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.085056 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af269ee-1565-4c36-a416-c4e2e7397fc5-serving-cert\") pod \"5af269ee-1565-4c36-a416-c4e2e7397fc5\" (UID: \"5af269ee-1565-4c36-a416-c4e2e7397fc5\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.085217 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-client-ca\") pod \"cbf992a4-d7af-455c-953b-c865445feb6c\" (UID: \"cbf992a4-d7af-455c-953b-c865445feb6c\") " Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.085771 4958 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.086127 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-client-ca" (OuterVolumeSpecName: "client-ca") pod "5af269ee-1565-4c36-a416-c4e2e7397fc5" (UID: "5af269ee-1565-4c36-a416-c4e2e7397fc5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.086653 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-config" (OuterVolumeSpecName: "config") pod "cbf992a4-d7af-455c-953b-c865445feb6c" (UID: "cbf992a4-d7af-455c-953b-c865445feb6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.086740 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cbf992a4-d7af-455c-953b-c865445feb6c" (UID: "cbf992a4-d7af-455c-953b-c865445feb6c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.086943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-client-ca" (OuterVolumeSpecName: "client-ca") pod "cbf992a4-d7af-455c-953b-c865445feb6c" (UID: "cbf992a4-d7af-455c-953b-c865445feb6c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.090561 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf992a4-d7af-455c-953b-c865445feb6c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cbf992a4-d7af-455c-953b-c865445feb6c" (UID: "cbf992a4-d7af-455c-953b-c865445feb6c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.091263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf992a4-d7af-455c-953b-c865445feb6c-kube-api-access-s5lk7" (OuterVolumeSpecName: "kube-api-access-s5lk7") pod "cbf992a4-d7af-455c-953b-c865445feb6c" (UID: "cbf992a4-d7af-455c-953b-c865445feb6c"). InnerVolumeSpecName "kube-api-access-s5lk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.091271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af269ee-1565-4c36-a416-c4e2e7397fc5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5af269ee-1565-4c36-a416-c4e2e7397fc5" (UID: "5af269ee-1565-4c36-a416-c4e2e7397fc5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.092497 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af269ee-1565-4c36-a416-c4e2e7397fc5-kube-api-access-cskx9" (OuterVolumeSpecName: "kube-api-access-cskx9") pod "5af269ee-1565-4c36-a416-c4e2e7397fc5" (UID: "5af269ee-1565-4c36-a416-c4e2e7397fc5"). InnerVolumeSpecName "kube-api-access-cskx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.187946 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cskx9\" (UniqueName: \"kubernetes.io/projected/5af269ee-1565-4c36-a416-c4e2e7397fc5-kube-api-access-cskx9\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188023 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf992a4-d7af-455c-953b-c865445feb6c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188053 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188077 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5af269ee-1565-4c36-a416-c4e2e7397fc5-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188103 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5lk7\" (UniqueName: \"kubernetes.io/projected/cbf992a4-d7af-455c-953b-c865445feb6c-kube-api-access-s5lk7\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188135 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af269ee-1565-4c36-a416-c4e2e7397fc5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188191 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.188215 4958 
reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf992a4-d7af-455c-953b-c865445feb6c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.639207 4958 generic.go:334] "Generic (PLEG): container finished" podID="5af269ee-1565-4c36-a416-c4e2e7397fc5" containerID="ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73" exitCode=0 Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.639261 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.639336 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" event={"ID":"5af269ee-1565-4c36-a416-c4e2e7397fc5","Type":"ContainerDied","Data":"ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73"} Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.639416 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2" event={"ID":"5af269ee-1565-4c36-a416-c4e2e7397fc5","Type":"ContainerDied","Data":"c0c05ff7108ed9a6e6874257ea66c5c2cd7dc794bdb1a691edd9db2ec827c41d"} Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.639448 4958 scope.go:117] "RemoveContainer" containerID="ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.644724 4958 generic.go:334] "Generic (PLEG): container finished" podID="cbf992a4-d7af-455c-953b-c865445feb6c" containerID="cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60" exitCode=0 Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.644777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" event={"ID":"cbf992a4-d7af-455c-953b-c865445feb6c","Type":"ContainerDied","Data":"cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60"} Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.644809 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" event={"ID":"cbf992a4-d7af-455c-953b-c865445feb6c","Type":"ContainerDied","Data":"18d80ed7d70cc68fdd1f03c7113d8070b382683417a0fbd9dd44b3ca98033247"} Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.644873 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-thrt8" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.673808 4958 scope.go:117] "RemoveContainer" containerID="ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73" Oct 06 12:00:06 crc kubenswrapper[4958]: E1006 12:00:06.674886 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73\": container with ID starting with ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73 not found: ID does not exist" containerID="ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.674944 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73"} err="failed to get container status \"ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73\": rpc error: code = NotFound desc = could not find container \"ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73\": container with ID starting with ace380a116ed4519de71a9140281ab0f6ea44de1d65345c9fe5a367ec3bd5e73 not found: ID does 
not exist" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.674976 4958 scope.go:117] "RemoveContainer" containerID="cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.699981 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"] Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.703312 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fmlg2"] Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.706372 4958 scope.go:117] "RemoveContainer" containerID="cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60" Oct 06 12:00:06 crc kubenswrapper[4958]: E1006 12:00:06.706946 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60\": container with ID starting with cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60 not found: ID does not exist" containerID="cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.706992 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60"} err="failed to get container status \"cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60\": rpc error: code = NotFound desc = could not find container \"cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60\": container with ID starting with cfcab5d80b99f4da114997dd75b2bcad7b74f295924ac69ca30394659bec0e60 not found: ID does not exist" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.734279 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-thrt8"] Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.739603 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-thrt8"] Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.922969 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af269ee-1565-4c36-a416-c4e2e7397fc5" path="/var/lib/kubelet/pods/5af269ee-1565-4c36-a416-c4e2e7397fc5/volumes" Oct 06 12:00:06 crc kubenswrapper[4958]: I1006 12:00:06.924069 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" path="/var/lib/kubelet/pods/cbf992a4-d7af-455c-953b-c865445feb6c/volumes" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.177433 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz"] Oct 06 12:00:07 crc kubenswrapper[4958]: E1006 12:00:07.177667 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af269ee-1565-4c36-a416-c4e2e7397fc5" containerName="route-controller-manager" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.177679 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af269ee-1565-4c36-a416-c4e2e7397fc5" containerName="route-controller-manager" Oct 06 12:00:07 crc kubenswrapper[4958]: E1006 12:00:07.177696 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" containerName="controller-manager" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.177702 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" containerName="controller-manager" Oct 06 12:00:07 crc kubenswrapper[4958]: E1006 12:00:07.177715 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7283fba9-72df-4561-9dbb-e495bf4f5fec" containerName="registry-server" Oct 06 12:00:07 crc 
kubenswrapper[4958]: I1006 12:00:07.177721 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7283fba9-72df-4561-9dbb-e495bf4f5fec" containerName="registry-server" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.177814 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af269ee-1565-4c36-a416-c4e2e7397fc5" containerName="route-controller-manager" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.177831 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7283fba9-72df-4561-9dbb-e495bf4f5fec" containerName="registry-server" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.177840 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf992a4-d7af-455c-953b-c865445feb6c" containerName="controller-manager" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.178233 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.184333 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz"] Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.184802 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.184947 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.184993 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.185340 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 
12:00:07.185569 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.186288 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.193643 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.202506 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m"] Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.202716 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2tk\" (UniqueName: \"kubernetes.io/projected/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-kube-api-access-sb2tk\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.202754 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-serving-cert\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.202780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-client-ca\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " 
pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.202805 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-config\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.202854 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-proxy-ca-bundles\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.205678 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.208724 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.208840 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.208843 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.208922 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.209196 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.209237 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.220096 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m"] Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/279a4b4e-f36d-4f1b-8031-a0e31155afd4-client-ca\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303292 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/279a4b4e-f36d-4f1b-8031-a0e31155afd4-serving-cert\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303488 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-proxy-ca-bundles\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303584 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6jp\" (UniqueName: \"kubernetes.io/projected/279a4b4e-f36d-4f1b-8031-a0e31155afd4-kube-api-access-mv6jp\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303643 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2tk\" (UniqueName: \"kubernetes.io/projected/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-kube-api-access-sb2tk\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303711 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-serving-cert\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: 
\"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303771 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-client-ca\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.303833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279a4b4e-f36d-4f1b-8031-a0e31155afd4-config\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.304267 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-config\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.305626 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-client-ca\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.306061 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-config\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.306899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-proxy-ca-bundles\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.310451 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-serving-cert\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.318848 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2tk\" (UniqueName: \"kubernetes.io/projected/5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a-kube-api-access-sb2tk\") pod \"controller-manager-84f4d7dc6-tg8rz\" (UID: \"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a\") " pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.405621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279a4b4e-f36d-4f1b-8031-a0e31155afd4-config\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.405686 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/279a4b4e-f36d-4f1b-8031-a0e31155afd4-client-ca\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.405704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/279a4b4e-f36d-4f1b-8031-a0e31155afd4-serving-cert\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.405741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6jp\" (UniqueName: \"kubernetes.io/projected/279a4b4e-f36d-4f1b-8031-a0e31155afd4-kube-api-access-mv6jp\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.407417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/279a4b4e-f36d-4f1b-8031-a0e31155afd4-client-ca\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.407459 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279a4b4e-f36d-4f1b-8031-a0e31155afd4-config\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: 
\"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.409075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/279a4b4e-f36d-4f1b-8031-a0e31155afd4-serving-cert\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.423974 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6jp\" (UniqueName: \"kubernetes.io/projected/279a4b4e-f36d-4f1b-8031-a0e31155afd4-kube-api-access-mv6jp\") pod \"route-controller-manager-6b6777b44c-vpv9m\" (UID: \"279a4b4e-f36d-4f1b-8031-a0e31155afd4\") " pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.510118 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.526300 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.748566 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz"] Oct 06 12:00:07 crc kubenswrapper[4958]: I1006 12:00:07.791798 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m"] Oct 06 12:00:07 crc kubenswrapper[4958]: W1006 12:00:07.804700 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod279a4b4e_f36d_4f1b_8031_a0e31155afd4.slice/crio-0dba03edb6477ac6f7e2ec2f70562595a3d959041ad01c4ae226404a13295e31 WatchSource:0}: Error finding container 0dba03edb6477ac6f7e2ec2f70562595a3d959041ad01c4ae226404a13295e31: Status 404 returned error can't find the container with id 0dba03edb6477ac6f7e2ec2f70562595a3d959041ad01c4ae226404a13295e31 Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.668284 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" event={"ID":"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a","Type":"ContainerStarted","Data":"b05020520d9f44ab2a45febefa0695036280f20253a1f1287bb1754a1bc8002a"} Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.668895 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.668915 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" event={"ID":"5cbb584c-ce0a-4782-9a71-d9ef11eb8f3a","Type":"ContainerStarted","Data":"ef629a2a5b77cbe9f0e8cff6b15013f1d3e515a9d623754c9c2d67210d694f5c"} Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.670957 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" event={"ID":"279a4b4e-f36d-4f1b-8031-a0e31155afd4","Type":"ContainerStarted","Data":"753fbd2cfb5ef7e9031f62e09586726dfe4afbb30e52c2cd5dcd3880a72c1bb3"} Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.671008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" event={"ID":"279a4b4e-f36d-4f1b-8031-a0e31155afd4","Type":"ContainerStarted","Data":"0dba03edb6477ac6f7e2ec2f70562595a3d959041ad01c4ae226404a13295e31"} Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.671273 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.674621 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.677032 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.690779 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84f4d7dc6-tg8rz" podStartSLOduration=1.690763514 podStartE2EDuration="1.690763514s" podCreationTimestamp="2025-10-06 12:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:00:08.688567395 +0000 UTC m=+762.574592713" watchObservedRunningTime="2025-10-06 12:00:08.690763514 +0000 UTC m=+762.576788832" Oct 06 12:00:08 crc kubenswrapper[4958]: I1006 12:00:08.737310 4958 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-6b6777b44c-vpv9m" podStartSLOduration=1.737283479 podStartE2EDuration="1.737283479s" podCreationTimestamp="2025-10-06 12:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:00:08.729723197 +0000 UTC m=+762.615748505" watchObservedRunningTime="2025-10-06 12:00:08.737283479 +0000 UTC m=+762.623308777" Oct 06 12:00:13 crc kubenswrapper[4958]: I1006 12:00:13.015470 4958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 12:00:14 crc kubenswrapper[4958]: I1006 12:00:14.022490 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:14 crc kubenswrapper[4958]: I1006 12:00:14.023079 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:14 crc kubenswrapper[4958]: I1006 12:00:14.065744 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:14 crc kubenswrapper[4958]: I1006 12:00:14.744023 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wdw62" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.307862 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt"] Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.310270 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.317455 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt"] Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.319650 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-l48x6" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.440680 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-bundle\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.440848 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nqsw\" (UniqueName: \"kubernetes.io/projected/825a268b-7779-4e3a-b87c-6769d02e8213-kube-api-access-7nqsw\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.440897 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-util\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 
12:00:16.542056 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nqsw\" (UniqueName: \"kubernetes.io/projected/825a268b-7779-4e3a-b87c-6769d02e8213-kube-api-access-7nqsw\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.542196 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-util\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.542261 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-bundle\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.542952 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-bundle\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.543075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-util\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.573673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nqsw\" (UniqueName: \"kubernetes.io/projected/825a268b-7779-4e3a-b87c-6769d02e8213-kube-api-access-7nqsw\") pod \"f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:16 crc kubenswrapper[4958]: I1006 12:00:16.643379 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:17 crc kubenswrapper[4958]: I1006 12:00:17.143890 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt"] Oct 06 12:00:17 crc kubenswrapper[4958]: I1006 12:00:17.734661 4958 generic.go:334] "Generic (PLEG): container finished" podID="825a268b-7779-4e3a-b87c-6769d02e8213" containerID="d66063e84a966a91b333fb2434d3d7211c36d335461a6fdc90bef22c5394de44" exitCode=0 Oct 06 12:00:17 crc kubenswrapper[4958]: I1006 12:00:17.734743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" event={"ID":"825a268b-7779-4e3a-b87c-6769d02e8213","Type":"ContainerDied","Data":"d66063e84a966a91b333fb2434d3d7211c36d335461a6fdc90bef22c5394de44"} Oct 06 12:00:17 crc kubenswrapper[4958]: I1006 12:00:17.735057 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" event={"ID":"825a268b-7779-4e3a-b87c-6769d02e8213","Type":"ContainerStarted","Data":"f8e86b8c556c277f52677fbd813f16fa725e231cad59af7e2f03a7e9a6d0bab9"} Oct 06 12:00:18 crc kubenswrapper[4958]: I1006 12:00:18.746627 4958 generic.go:334] "Generic (PLEG): container finished" podID="825a268b-7779-4e3a-b87c-6769d02e8213" containerID="d66e6f95e5667c648c2959fc7445a137a7b7cbb9ea76459255b1c8c59927e620" exitCode=0 Oct 06 12:00:18 crc kubenswrapper[4958]: I1006 12:00:18.746720 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" event={"ID":"825a268b-7779-4e3a-b87c-6769d02e8213","Type":"ContainerDied","Data":"d66e6f95e5667c648c2959fc7445a137a7b7cbb9ea76459255b1c8c59927e620"} Oct 06 12:00:19 crc kubenswrapper[4958]: I1006 12:00:19.759529 4958 generic.go:334] "Generic (PLEG): container finished" podID="825a268b-7779-4e3a-b87c-6769d02e8213" containerID="5bff48423e5be513060db699cb9900a8fee295059c84a0d544513ce8d5d0aa1f" exitCode=0 Oct 06 12:00:19 crc kubenswrapper[4958]: I1006 12:00:19.759584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" event={"ID":"825a268b-7779-4e3a-b87c-6769d02e8213","Type":"ContainerDied","Data":"5bff48423e5be513060db699cb9900a8fee295059c84a0d544513ce8d5d0aa1f"} Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.259292 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rdtfs"] Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.261749 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.296749 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rdtfs"] Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.305031 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrcx\" (UniqueName: \"kubernetes.io/projected/376a23d7-acf1-4160-9aa6-fbe911c96837-kube-api-access-kkrcx\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.305128 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-utilities\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.305267 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-catalog-content\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.406406 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrcx\" (UniqueName: \"kubernetes.io/projected/376a23d7-acf1-4160-9aa6-fbe911c96837-kube-api-access-kkrcx\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.406495 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-utilities\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.407185 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-catalog-content\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.407228 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-utilities\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.407273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-catalog-content\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.435774 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrcx\" (UniqueName: \"kubernetes.io/projected/376a23d7-acf1-4160-9aa6-fbe911c96837-kube-api-access-kkrcx\") pod \"community-operators-rdtfs\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:20 crc kubenswrapper[4958]: I1006 12:00:20.602901 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.141661 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rdtfs"] Oct 06 12:00:21 crc kubenswrapper[4958]: W1006 12:00:21.143731 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376a23d7_acf1_4160_9aa6_fbe911c96837.slice/crio-2fe753263cd22d954353d20c13c647cafdcf04c6acd9f991b8ef560127644170 WatchSource:0}: Error finding container 2fe753263cd22d954353d20c13c647cafdcf04c6acd9f991b8ef560127644170: Status 404 returned error can't find the container with id 2fe753263cd22d954353d20c13c647cafdcf04c6acd9f991b8ef560127644170 Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.216984 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.317881 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-util\") pod \"825a268b-7779-4e3a-b87c-6769d02e8213\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.318025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-bundle\") pod \"825a268b-7779-4e3a-b87c-6769d02e8213\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.318115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nqsw\" (UniqueName: \"kubernetes.io/projected/825a268b-7779-4e3a-b87c-6769d02e8213-kube-api-access-7nqsw\") pod 
\"825a268b-7779-4e3a-b87c-6769d02e8213\" (UID: \"825a268b-7779-4e3a-b87c-6769d02e8213\") " Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.318835 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-bundle" (OuterVolumeSpecName: "bundle") pod "825a268b-7779-4e3a-b87c-6769d02e8213" (UID: "825a268b-7779-4e3a-b87c-6769d02e8213"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.325493 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825a268b-7779-4e3a-b87c-6769d02e8213-kube-api-access-7nqsw" (OuterVolumeSpecName: "kube-api-access-7nqsw") pod "825a268b-7779-4e3a-b87c-6769d02e8213" (UID: "825a268b-7779-4e3a-b87c-6769d02e8213"). InnerVolumeSpecName "kube-api-access-7nqsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.330833 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-util" (OuterVolumeSpecName: "util") pod "825a268b-7779-4e3a-b87c-6769d02e8213" (UID: "825a268b-7779-4e3a-b87c-6769d02e8213"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.419870 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-util\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.419902 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/825a268b-7779-4e3a-b87c-6769d02e8213-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.419913 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nqsw\" (UniqueName: \"kubernetes.io/projected/825a268b-7779-4e3a-b87c-6769d02e8213-kube-api-access-7nqsw\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.816394 4958 generic.go:334] "Generic (PLEG): container finished" podID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerID="af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360" exitCode=0 Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.816467 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerDied","Data":"af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360"} Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.816545 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerStarted","Data":"2fe753263cd22d954353d20c13c647cafdcf04c6acd9f991b8ef560127644170"} Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.820855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" 
event={"ID":"825a268b-7779-4e3a-b87c-6769d02e8213","Type":"ContainerDied","Data":"f8e86b8c556c277f52677fbd813f16fa725e231cad59af7e2f03a7e9a6d0bab9"} Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.820958 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e86b8c556c277f52677fbd813f16fa725e231cad59af7e2f03a7e9a6d0bab9" Oct 06 12:00:21 crc kubenswrapper[4958]: I1006 12:00:21.821043 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt" Oct 06 12:00:22 crc kubenswrapper[4958]: I1006 12:00:22.835596 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerStarted","Data":"83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70"} Oct 06 12:00:23 crc kubenswrapper[4958]: I1006 12:00:23.802370 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:00:23 crc kubenswrapper[4958]: I1006 12:00:23.802439 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:00:23 crc kubenswrapper[4958]: I1006 12:00:23.843018 4958 generic.go:334] "Generic (PLEG): container finished" podID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerID="83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70" exitCode=0 Oct 06 12:00:23 crc kubenswrapper[4958]: I1006 12:00:23.843082 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerDied","Data":"83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70"} Oct 06 12:00:24 crc kubenswrapper[4958]: I1006 12:00:24.854783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerStarted","Data":"eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e"} Oct 06 12:00:24 crc kubenswrapper[4958]: I1006 12:00:24.883811 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rdtfs" podStartSLOduration=2.302031911 podStartE2EDuration="4.883784029s" podCreationTimestamp="2025-10-06 12:00:20 +0000 UTC" firstStartedPulling="2025-10-06 12:00:21.821931836 +0000 UTC m=+775.707957154" lastFinishedPulling="2025-10-06 12:00:24.403683924 +0000 UTC m=+778.289709272" observedRunningTime="2025-10-06 12:00:24.880569683 +0000 UTC m=+778.766595021" watchObservedRunningTime="2025-10-06 12:00:24.883784029 +0000 UTC m=+778.769809367" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.708681 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6"] Oct 06 12:00:27 crc kubenswrapper[4958]: E1006 12:00:27.709501 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="pull" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.709523 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="pull" Oct 06 12:00:27 crc kubenswrapper[4958]: E1006 12:00:27.709739 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="util" Oct 06 12:00:27 crc 
kubenswrapper[4958]: I1006 12:00:27.709750 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="util" Oct 06 12:00:27 crc kubenswrapper[4958]: E1006 12:00:27.709761 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="extract" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.709772 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="extract" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.709971 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="825a268b-7779-4e3a-b87c-6769d02e8213" containerName="extract" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.711026 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.714409 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jc5xq" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.748444 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6"] Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.819792 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlc5s\" (UniqueName: \"kubernetes.io/projected/a2a23b45-5568-49fb-9e85-6bce53831d13-kube-api-access-rlc5s\") pod \"openstack-operator-controller-operator-57448bb547-2ptw6\" (UID: \"a2a23b45-5568-49fb-9e85-6bce53831d13\") " pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.921272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rlc5s\" (UniqueName: \"kubernetes.io/projected/a2a23b45-5568-49fb-9e85-6bce53831d13-kube-api-access-rlc5s\") pod \"openstack-operator-controller-operator-57448bb547-2ptw6\" (UID: \"a2a23b45-5568-49fb-9e85-6bce53831d13\") " pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:27 crc kubenswrapper[4958]: I1006 12:00:27.944943 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlc5s\" (UniqueName: \"kubernetes.io/projected/a2a23b45-5568-49fb-9e85-6bce53831d13-kube-api-access-rlc5s\") pod \"openstack-operator-controller-operator-57448bb547-2ptw6\" (UID: \"a2a23b45-5568-49fb-9e85-6bce53831d13\") " pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:28 crc kubenswrapper[4958]: I1006 12:00:28.028212 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:28 crc kubenswrapper[4958]: I1006 12:00:28.434929 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6"] Oct 06 12:00:28 crc kubenswrapper[4958]: I1006 12:00:28.879525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" event={"ID":"a2a23b45-5568-49fb-9e85-6bce53831d13","Type":"ContainerStarted","Data":"03da769097fa6ed119fe49ce59ec04788ce181381732626398ab83e4d6119725"} Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.603557 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.603888 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.648504 
4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mzj"] Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.664578 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.664720 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mzj"] Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.671918 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.759853 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74rbg\" (UniqueName: \"kubernetes.io/projected/a6516077-3638-4800-ba2f-70f1f3506eba-kube-api-access-74rbg\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.759919 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-utilities\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.759939 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-catalog-content\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.860947 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-utilities\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.861046 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-catalog-content\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.861210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74rbg\" (UniqueName: \"kubernetes.io/projected/a6516077-3638-4800-ba2f-70f1f3506eba-kube-api-access-74rbg\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.862419 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-utilities\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.862924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-catalog-content\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.897130 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-74rbg\" (UniqueName: \"kubernetes.io/projected/a6516077-3638-4800-ba2f-70f1f3506eba-kube-api-access-74rbg\") pod \"redhat-marketplace-b4mzj\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:30 crc kubenswrapper[4958]: I1006 12:00:30.933312 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:31 crc kubenswrapper[4958]: I1006 12:00:31.001906 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:32 crc kubenswrapper[4958]: I1006 12:00:32.474985 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mzj"] Oct 06 12:00:32 crc kubenswrapper[4958]: I1006 12:00:32.914401 4958 generic.go:334] "Generic (PLEG): container finished" podID="a6516077-3638-4800-ba2f-70f1f3506eba" containerID="5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67" exitCode=0 Oct 06 12:00:32 crc kubenswrapper[4958]: I1006 12:00:32.933313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mzj" event={"ID":"a6516077-3638-4800-ba2f-70f1f3506eba","Type":"ContainerDied","Data":"5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67"} Oct 06 12:00:32 crc kubenswrapper[4958]: I1006 12:00:32.933391 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mzj" event={"ID":"a6516077-3638-4800-ba2f-70f1f3506eba","Type":"ContainerStarted","Data":"d1f5fa0f00fe805f1dae7a8898bb336eeaa9144d98817761fc68af97daf6421a"} Oct 06 12:00:32 crc kubenswrapper[4958]: I1006 12:00:32.933422 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" 
event={"ID":"a2a23b45-5568-49fb-9e85-6bce53831d13","Type":"ContainerStarted","Data":"4f342a880e3435bdf40be78ac0738bf8b2d74506311ce88561f75b62e882ac16"} Oct 06 12:00:33 crc kubenswrapper[4958]: I1006 12:00:33.861676 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rdtfs"] Oct 06 12:00:33 crc kubenswrapper[4958]: I1006 12:00:33.862525 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rdtfs" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="registry-server" containerID="cri-o://eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e" gracePeriod=2 Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.625334 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.718462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkrcx\" (UniqueName: \"kubernetes.io/projected/376a23d7-acf1-4160-9aa6-fbe911c96837-kube-api-access-kkrcx\") pod \"376a23d7-acf1-4160-9aa6-fbe911c96837\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.718617 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-utilities\") pod \"376a23d7-acf1-4160-9aa6-fbe911c96837\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.718700 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-catalog-content\") pod \"376a23d7-acf1-4160-9aa6-fbe911c96837\" (UID: \"376a23d7-acf1-4160-9aa6-fbe911c96837\") " Oct 06 12:00:34 crc 
kubenswrapper[4958]: I1006 12:00:34.719649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-utilities" (OuterVolumeSpecName: "utilities") pod "376a23d7-acf1-4160-9aa6-fbe911c96837" (UID: "376a23d7-acf1-4160-9aa6-fbe911c96837"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.725315 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376a23d7-acf1-4160-9aa6-fbe911c96837-kube-api-access-kkrcx" (OuterVolumeSpecName: "kube-api-access-kkrcx") pod "376a23d7-acf1-4160-9aa6-fbe911c96837" (UID: "376a23d7-acf1-4160-9aa6-fbe911c96837"). InnerVolumeSpecName "kube-api-access-kkrcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.764632 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "376a23d7-acf1-4160-9aa6-fbe911c96837" (UID: "376a23d7-acf1-4160-9aa6-fbe911c96837"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.820426 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.820497 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376a23d7-acf1-4160-9aa6-fbe911c96837-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.820512 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkrcx\" (UniqueName: \"kubernetes.io/projected/376a23d7-acf1-4160-9aa6-fbe911c96837-kube-api-access-kkrcx\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.940228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" event={"ID":"a2a23b45-5568-49fb-9e85-6bce53831d13","Type":"ContainerStarted","Data":"97fefd880df7919640244ab381e6db52589b816a932bf22ed28cbde3a40a7882"} Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.940570 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.944582 4958 generic.go:334] "Generic (PLEG): container finished" podID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerID="eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e" exitCode=0 Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.944654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" 
event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerDied","Data":"eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e"} Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.944697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdtfs" event={"ID":"376a23d7-acf1-4160-9aa6-fbe911c96837","Type":"ContainerDied","Data":"2fe753263cd22d954353d20c13c647cafdcf04c6acd9f991b8ef560127644170"} Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.944735 4958 scope.go:117] "RemoveContainer" containerID="eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.944921 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdtfs" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.987126 4958 scope.go:117] "RemoveContainer" containerID="83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70" Oct 06 12:00:34 crc kubenswrapper[4958]: I1006 12:00:34.999601 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" podStartSLOduration=1.824866764 podStartE2EDuration="7.999582408s" podCreationTimestamp="2025-10-06 12:00:27 +0000 UTC" firstStartedPulling="2025-10-06 12:00:28.452394071 +0000 UTC m=+782.338419379" lastFinishedPulling="2025-10-06 12:00:34.627109715 +0000 UTC m=+788.513135023" observedRunningTime="2025-10-06 12:00:34.991181252 +0000 UTC m=+788.877206580" watchObservedRunningTime="2025-10-06 12:00:34.999582408 +0000 UTC m=+788.885607726" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.028242 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rdtfs"] Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.028906 4958 scope.go:117] "RemoveContainer" 
containerID="af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.035882 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rdtfs"] Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.052246 4958 scope.go:117] "RemoveContainer" containerID="eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e" Oct 06 12:00:35 crc kubenswrapper[4958]: E1006 12:00:35.052872 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e\": container with ID starting with eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e not found: ID does not exist" containerID="eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.052924 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e"} err="failed to get container status \"eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e\": rpc error: code = NotFound desc = could not find container \"eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e\": container with ID starting with eab06c162f2078942bfb50032d54439ac4a068dac71bd5445acd87c30029fe3e not found: ID does not exist" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.052956 4958 scope.go:117] "RemoveContainer" containerID="83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70" Oct 06 12:00:35 crc kubenswrapper[4958]: E1006 12:00:35.053504 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70\": container with ID starting with 
83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70 not found: ID does not exist" containerID="83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.053533 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70"} err="failed to get container status \"83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70\": rpc error: code = NotFound desc = could not find container \"83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70\": container with ID starting with 83906d3c0d89561a0f2fb1c07dae9c33982863ef7145d9dea0b96096c304ce70 not found: ID does not exist" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.053551 4958 scope.go:117] "RemoveContainer" containerID="af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360" Oct 06 12:00:35 crc kubenswrapper[4958]: E1006 12:00:35.054140 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360\": container with ID starting with af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360 not found: ID does not exist" containerID="af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.054318 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360"} err="failed to get container status \"af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360\": rpc error: code = NotFound desc = could not find container \"af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360\": container with ID starting with af59ad16972e4f10c09e0f07f4ad1df78334d08ab5cf369ef4017279f7547360 not found: ID does not 
exist" Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.954360 4958 generic.go:334] "Generic (PLEG): container finished" podID="a6516077-3638-4800-ba2f-70f1f3506eba" containerID="79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564" exitCode=0 Oct 06 12:00:35 crc kubenswrapper[4958]: I1006 12:00:35.954422 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mzj" event={"ID":"a6516077-3638-4800-ba2f-70f1f3506eba","Type":"ContainerDied","Data":"79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564"} Oct 06 12:00:36 crc kubenswrapper[4958]: I1006 12:00:36.919991 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" path="/var/lib/kubelet/pods/376a23d7-acf1-4160-9aa6-fbe911c96837/volumes" Oct 06 12:00:38 crc kubenswrapper[4958]: I1006 12:00:38.032180 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-57448bb547-2ptw6" Oct 06 12:00:38 crc kubenswrapper[4958]: I1006 12:00:38.977505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mzj" event={"ID":"a6516077-3638-4800-ba2f-70f1f3506eba","Type":"ContainerStarted","Data":"4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33"} Oct 06 12:00:38 crc kubenswrapper[4958]: I1006 12:00:38.994554 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b4mzj" podStartSLOduration=4.24671859 podStartE2EDuration="8.994536729s" podCreationTimestamp="2025-10-06 12:00:30 +0000 UTC" firstStartedPulling="2025-10-06 12:00:33.052512703 +0000 UTC m=+786.938538021" lastFinishedPulling="2025-10-06 12:00:37.800330842 +0000 UTC m=+791.686356160" observedRunningTime="2025-10-06 12:00:38.993097633 +0000 UTC m=+792.879122941" watchObservedRunningTime="2025-10-06 12:00:38.994536729 +0000 UTC 
m=+792.880562037" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.458106 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8wx6x"] Oct 06 12:00:40 crc kubenswrapper[4958]: E1006 12:00:40.461067 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="extract-content" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.461110 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="extract-content" Oct 06 12:00:40 crc kubenswrapper[4958]: E1006 12:00:40.461133 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="extract-utilities" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.461177 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="extract-utilities" Oct 06 12:00:40 crc kubenswrapper[4958]: E1006 12:00:40.461199 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="registry-server" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.461211 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="registry-server" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.461414 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="376a23d7-acf1-4160-9aa6-fbe911c96837" containerName="registry-server" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.464569 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.477033 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8wx6x"] Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.605067 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-utilities\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.605125 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966nw\" (UniqueName: \"kubernetes.io/projected/f7018d40-65db-421c-941b-92c80dfb9166-kube-api-access-966nw\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.605170 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-catalog-content\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.706650 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-catalog-content\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.706778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-utilities\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.706800 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-966nw\" (UniqueName: \"kubernetes.io/projected/f7018d40-65db-421c-941b-92c80dfb9166-kube-api-access-966nw\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.707180 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-catalog-content\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.707200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-utilities\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.725312 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-966nw\" (UniqueName: \"kubernetes.io/projected/f7018d40-65db-421c-941b-92c80dfb9166-kube-api-access-966nw\") pod \"redhat-operators-8wx6x\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:40 crc kubenswrapper[4958]: I1006 12:00:40.783705 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:41 crc kubenswrapper[4958]: I1006 12:00:41.003109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:41 crc kubenswrapper[4958]: I1006 12:00:41.003476 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:41 crc kubenswrapper[4958]: I1006 12:00:41.054571 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:41 crc kubenswrapper[4958]: I1006 12:00:41.256314 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8wx6x"] Oct 06 12:00:41 crc kubenswrapper[4958]: W1006 12:00:41.261366 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7018d40_65db_421c_941b_92c80dfb9166.slice/crio-f8a00b0d4a45a77d8cf2ae4a6a1be013892258ce21fee85e95bd243b493ba857 WatchSource:0}: Error finding container f8a00b0d4a45a77d8cf2ae4a6a1be013892258ce21fee85e95bd243b493ba857: Status 404 returned error can't find the container with id f8a00b0d4a45a77d8cf2ae4a6a1be013892258ce21fee85e95bd243b493ba857 Oct 06 12:00:42 crc kubenswrapper[4958]: I1006 12:00:42.000028 4958 generic.go:334] "Generic (PLEG): container finished" podID="f7018d40-65db-421c-941b-92c80dfb9166" containerID="c8905abbcf4191168b1967cdd346e2fb37755236d2825f1a51ae5873f760111d" exitCode=0 Oct 06 12:00:42 crc kubenswrapper[4958]: I1006 12:00:42.000195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerDied","Data":"c8905abbcf4191168b1967cdd346e2fb37755236d2825f1a51ae5873f760111d"} Oct 06 12:00:42 crc kubenswrapper[4958]: I1006 12:00:42.000587 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerStarted","Data":"f8a00b0d4a45a77d8cf2ae4a6a1be013892258ce21fee85e95bd243b493ba857"} Oct 06 12:00:42 crc kubenswrapper[4958]: I1006 12:00:42.071405 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:42 crc kubenswrapper[4958]: I1006 12:00:42.850282 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mzj"] Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.017181 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b4mzj" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="registry-server" containerID="cri-o://4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33" gracePeriod=2 Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.017566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerStarted","Data":"4d306fbbf6c918cba42304ace420077897fc4d9621e28eebe89ec281d6d248a4"} Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.454852 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.563603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-catalog-content\") pod \"a6516077-3638-4800-ba2f-70f1f3506eba\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.563697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-utilities\") pod \"a6516077-3638-4800-ba2f-70f1f3506eba\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.563717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74rbg\" (UniqueName: \"kubernetes.io/projected/a6516077-3638-4800-ba2f-70f1f3506eba-kube-api-access-74rbg\") pod \"a6516077-3638-4800-ba2f-70f1f3506eba\" (UID: \"a6516077-3638-4800-ba2f-70f1f3506eba\") " Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.564499 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-utilities" (OuterVolumeSpecName: "utilities") pod "a6516077-3638-4800-ba2f-70f1f3506eba" (UID: "a6516077-3638-4800-ba2f-70f1f3506eba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.582447 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6516077-3638-4800-ba2f-70f1f3506eba-kube-api-access-74rbg" (OuterVolumeSpecName: "kube-api-access-74rbg") pod "a6516077-3638-4800-ba2f-70f1f3506eba" (UID: "a6516077-3638-4800-ba2f-70f1f3506eba"). InnerVolumeSpecName "kube-api-access-74rbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.598711 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6516077-3638-4800-ba2f-70f1f3506eba" (UID: "a6516077-3638-4800-ba2f-70f1f3506eba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.665010 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.665055 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6516077-3638-4800-ba2f-70f1f3506eba-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:44 crc kubenswrapper[4958]: I1006 12:00:44.665065 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74rbg\" (UniqueName: \"kubernetes.io/projected/a6516077-3638-4800-ba2f-70f1f3506eba-kube-api-access-74rbg\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.023102 4958 generic.go:334] "Generic (PLEG): container finished" podID="f7018d40-65db-421c-941b-92c80dfb9166" containerID="4d306fbbf6c918cba42304ace420077897fc4d9621e28eebe89ec281d6d248a4" exitCode=0 Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.023185 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerDied","Data":"4d306fbbf6c918cba42304ace420077897fc4d9621e28eebe89ec281d6d248a4"} Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.026042 4958 generic.go:334] "Generic (PLEG): container 
finished" podID="a6516077-3638-4800-ba2f-70f1f3506eba" containerID="4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33" exitCode=0 Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.026064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mzj" event={"ID":"a6516077-3638-4800-ba2f-70f1f3506eba","Type":"ContainerDied","Data":"4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33"} Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.026080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mzj" event={"ID":"a6516077-3638-4800-ba2f-70f1f3506eba","Type":"ContainerDied","Data":"d1f5fa0f00fe805f1dae7a8898bb336eeaa9144d98817761fc68af97daf6421a"} Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.026096 4958 scope.go:117] "RemoveContainer" containerID="4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.026119 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mzj" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.047090 4958 scope.go:117] "RemoveContainer" containerID="79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.074520 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mzj"] Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.077047 4958 scope.go:117] "RemoveContainer" containerID="5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.082561 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mzj"] Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.090886 4958 scope.go:117] "RemoveContainer" containerID="4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33" Oct 06 12:00:45 crc kubenswrapper[4958]: E1006 12:00:45.091620 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33\": container with ID starting with 4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33 not found: ID does not exist" containerID="4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.091651 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33"} err="failed to get container status \"4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33\": rpc error: code = NotFound desc = could not find container \"4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33\": container with ID starting with 4037ef42cf2b4897a14b160e7296b0feb0c69a9577809be224af0bc2576b7e33 not found: 
ID does not exist" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.091671 4958 scope.go:117] "RemoveContainer" containerID="79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564" Oct 06 12:00:45 crc kubenswrapper[4958]: E1006 12:00:45.091978 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564\": container with ID starting with 79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564 not found: ID does not exist" containerID="79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.091998 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564"} err="failed to get container status \"79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564\": rpc error: code = NotFound desc = could not find container \"79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564\": container with ID starting with 79299a58f1cc11121065ad846273c34175d3baef4c0ae67307392b2de2e92564 not found: ID does not exist" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.092009 4958 scope.go:117] "RemoveContainer" containerID="5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67" Oct 06 12:00:45 crc kubenswrapper[4958]: E1006 12:00:45.092190 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67\": container with ID starting with 5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67 not found: ID does not exist" containerID="5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67" Oct 06 12:00:45 crc kubenswrapper[4958]: I1006 12:00:45.092207 4958 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67"} err="failed to get container status \"5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67\": rpc error: code = NotFound desc = could not find container \"5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67\": container with ID starting with 5fad2931a55bbc89a6d3a795eb00ef543e711d543b7fe208667b1725c2532f67 not found: ID does not exist" Oct 06 12:00:46 crc kubenswrapper[4958]: I1006 12:00:46.033132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerStarted","Data":"639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929"} Oct 06 12:00:46 crc kubenswrapper[4958]: I1006 12:00:46.057486 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8wx6x" podStartSLOduration=2.6123001710000002 podStartE2EDuration="6.057466027s" podCreationTimestamp="2025-10-06 12:00:40 +0000 UTC" firstStartedPulling="2025-10-06 12:00:42.002481446 +0000 UTC m=+795.888506794" lastFinishedPulling="2025-10-06 12:00:45.447647332 +0000 UTC m=+799.333672650" observedRunningTime="2025-10-06 12:00:46.055851857 +0000 UTC m=+799.941877155" watchObservedRunningTime="2025-10-06 12:00:46.057466027 +0000 UTC m=+799.943491345" Oct 06 12:00:46 crc kubenswrapper[4958]: I1006 12:00:46.936836 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" path="/var/lib/kubelet/pods/a6516077-3638-4800-ba2f-70f1f3506eba/volumes" Oct 06 12:00:50 crc kubenswrapper[4958]: I1006 12:00:50.784388 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:50 crc kubenswrapper[4958]: I1006 12:00:50.785044 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:00:51 crc kubenswrapper[4958]: I1006 12:00:51.839587 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8wx6x" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="registry-server" probeResult="failure" output=< Oct 06 12:00:51 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 12:00:51 crc kubenswrapper[4958]: > Oct 06 12:00:53 crc kubenswrapper[4958]: I1006 12:00:53.802310 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:00:53 crc kubenswrapper[4958]: I1006 12:00:53.802387 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.972593 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6"] Oct 06 12:00:54 crc kubenswrapper[4958]: E1006 12:00:54.973113 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="extract-utilities" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.973128 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="extract-utilities" Oct 06 12:00:54 crc kubenswrapper[4958]: E1006 12:00:54.973139 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="extract-content" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.973164 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="extract-content" Oct 06 12:00:54 crc kubenswrapper[4958]: E1006 12:00:54.973178 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="registry-server" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.973186 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="registry-server" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.973327 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6516077-3638-4800-ba2f-70f1f3506eba" containerName="registry-server" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.973880 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.975471 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dzmfd" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.981413 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999"] Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.982711 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.984121 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9pt8m" Oct 06 12:00:54 crc kubenswrapper[4958]: I1006 12:00:54.986481 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.009087 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.014808 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.016163 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.020725 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dhh22" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.052887 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.069271 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.070173 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.074623 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-stl2x" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.102462 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.103678 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.113573 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2667c" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.113869 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.131343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv2q\" (UniqueName: \"kubernetes.io/projected/cbe81ce3-66c4-4226-bc0a-78d6757561ff-kube-api-access-9mv2q\") pod \"barbican-operator-controller-manager-58c4cd55f4-l8pp6\" (UID: \"cbe81ce3-66c4-4226-bc0a-78d6757561ff\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.131501 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6x66\" (UniqueName: \"kubernetes.io/projected/b7998026-248b-4fdb-b5fe-8e6ca29c69f0-kube-api-access-d6x66\") pod \"cinder-operator-controller-manager-7d4d4f8d-th999\" (UID: \"b7998026-248b-4fdb-b5fe-8e6ca29c69f0\") " 
pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.131612 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmw6\" (UniqueName: \"kubernetes.io/projected/efb3e8ee-d92e-49fe-82c8-3fbe5794410f-kube-api-access-7fmw6\") pod \"designate-operator-controller-manager-75dfd9b554-4qklz\" (UID: \"efb3e8ee-d92e-49fe-82c8-3fbe5794410f\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.137744 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.170982 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.171133 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.183667 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.198640 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d9wvx" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.205431 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.206989 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.211775 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fj6m9" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.213986 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.214954 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.230765 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.232579 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.233570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv2q\" (UniqueName: \"kubernetes.io/projected/cbe81ce3-66c4-4226-bc0a-78d6757561ff-kube-api-access-9mv2q\") pod \"barbican-operator-controller-manager-58c4cd55f4-l8pp6\" (UID: \"cbe81ce3-66c4-4226-bc0a-78d6757561ff\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.233723 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-thv4z" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.233735 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6x66\" (UniqueName: 
\"kubernetes.io/projected/b7998026-248b-4fdb-b5fe-8e6ca29c69f0-kube-api-access-d6x66\") pod \"cinder-operator-controller-manager-7d4d4f8d-th999\" (UID: \"b7998026-248b-4fdb-b5fe-8e6ca29c69f0\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.234218 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.234271 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvkk\" (UniqueName: \"kubernetes.io/projected/da95cbd7-81b9-48e7-99eb-207063cf651a-kube-api-access-wgvkk\") pod \"glance-operator-controller-manager-5dc44df7d5-jj784\" (UID: \"da95cbd7-81b9-48e7-99eb-207063cf651a\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.234423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxr2f\" (UniqueName: \"kubernetes.io/projected/96446a00-b397-4b48-94bf-432c32ed13cb-kube-api-access-mxr2f\") pod \"heat-operator-controller-manager-54b4974c45-xc89l\" (UID: \"96446a00-b397-4b48-94bf-432c32ed13cb\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.234471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fmw6\" (UniqueName: \"kubernetes.io/projected/efb3e8ee-d92e-49fe-82c8-3fbe5794410f-kube-api-access-7fmw6\") pod \"designate-operator-controller-manager-75dfd9b554-4qklz\" (UID: \"efb3e8ee-d92e-49fe-82c8-3fbe5794410f\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.255334 4958 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.256276 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.259385 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xhjbs" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.262012 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.262312 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cbsk8" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.271338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6x66\" (UniqueName: \"kubernetes.io/projected/b7998026-248b-4fdb-b5fe-8e6ca29c69f0-kube-api-access-d6x66\") pod \"cinder-operator-controller-manager-7d4d4f8d-th999\" (UID: \"b7998026-248b-4fdb-b5fe-8e6ca29c69f0\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.272737 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mv2q\" (UniqueName: \"kubernetes.io/projected/cbe81ce3-66c4-4226-bc0a-78d6757561ff-kube-api-access-9mv2q\") pod \"barbican-operator-controller-manager-58c4cd55f4-l8pp6\" (UID: \"cbe81ce3-66c4-4226-bc0a-78d6757561ff\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.274528 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.284582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fmw6\" (UniqueName: \"kubernetes.io/projected/efb3e8ee-d92e-49fe-82c8-3fbe5794410f-kube-api-access-7fmw6\") pod \"designate-operator-controller-manager-75dfd9b554-4qklz\" (UID: \"efb3e8ee-d92e-49fe-82c8-3fbe5794410f\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.299253 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.307732 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.309721 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.328931 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.333178 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.333495 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.342801 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vlcjs" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.342899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcb8\" (UniqueName: \"kubernetes.io/projected/e1d4f271-a424-45d7-abf0-33633ac7713c-kube-api-access-zgcb8\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-86s48\" (UID: \"e1d4f271-a424-45d7-abf0-33633ac7713c\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.342952 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvkk\" (UniqueName: \"kubernetes.io/projected/da95cbd7-81b9-48e7-99eb-207063cf651a-kube-api-access-wgvkk\") pod \"glance-operator-controller-manager-5dc44df7d5-jj784\" (UID: \"da95cbd7-81b9-48e7-99eb-207063cf651a\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.342984 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwctl\" (UniqueName: \"kubernetes.io/projected/7d2f0a48-cffe-49d6-8ac8-830558228e2a-kube-api-access-gwctl\") pod \"horizon-operator-controller-manager-76d5b87f47-7nfvh\" (UID: \"7d2f0a48-cffe-49d6-8ac8-830558228e2a\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.343016 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxr2f\" (UniqueName: 
\"kubernetes.io/projected/96446a00-b397-4b48-94bf-432c32ed13cb-kube-api-access-mxr2f\") pod \"heat-operator-controller-manager-54b4974c45-xc89l\" (UID: \"96446a00-b397-4b48-94bf-432c32ed13cb\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.343040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a0f56a2-c168-4707-acac-43cc91b44835-cert\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.343078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jtw6\" (UniqueName: \"kubernetes.io/projected/8a0f56a2-c168-4707-acac-43cc91b44835-kube-api-access-8jtw6\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.343105 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5x4\" (UniqueName: \"kubernetes.io/projected/78e7c04a-fb1a-420f-a99b-94b6b0cf899a-kube-api-access-vg5x4\") pod \"ironic-operator-controller-manager-649675d675-6h8b6\" (UID: \"78e7c04a-fb1a-420f-a99b-94b6b0cf899a\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.345396 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.352078 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.353282 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.354959 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ln6ps" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.358577 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.358871 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.359818 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.363098 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.365674 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.370930 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.375983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.376046 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.377046 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.381636 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.381906 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pc245" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.382474 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2b7dc" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.382587 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.382822 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-kcpmp" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.388350 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv"] Oct 06 12:00:55 crc 
kubenswrapper[4958]: I1006 12:00:55.391886 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxr2f\" (UniqueName: \"kubernetes.io/projected/96446a00-b397-4b48-94bf-432c32ed13cb-kube-api-access-mxr2f\") pod \"heat-operator-controller-manager-54b4974c45-xc89l\" (UID: \"96446a00-b397-4b48-94bf-432c32ed13cb\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.393753 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvkk\" (UniqueName: \"kubernetes.io/projected/da95cbd7-81b9-48e7-99eb-207063cf651a-kube-api-access-wgvkk\") pod \"glance-operator-controller-manager-5dc44df7d5-jj784\" (UID: \"da95cbd7-81b9-48e7-99eb-207063cf651a\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.397207 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.398470 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.402018 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-j7f2f" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.402025 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.412865 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.415693 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.417720 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dqf9x" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.417804 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.418718 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.421402 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nr9rl" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.431346 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444139 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwctl\" (UniqueName: \"kubernetes.io/projected/7d2f0a48-cffe-49d6-8ac8-830558228e2a-kube-api-access-gwctl\") pod \"horizon-operator-controller-manager-76d5b87f47-7nfvh\" (UID: \"7d2f0a48-cffe-49d6-8ac8-830558228e2a\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxw9c\" (UniqueName: \"kubernetes.io/projected/cb612d52-fceb-471f-af53-104bfc2966e7-kube-api-access-mxw9c\") pod \"manila-operator-controller-manager-65d89cfd9f-gqj7w\" (UID: \"cb612d52-fceb-471f-af53-104bfc2966e7\") " 
pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a0f56a2-c168-4707-acac-43cc91b44835-cert\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jtw6\" (UniqueName: \"kubernetes.io/projected/8a0f56a2-c168-4707-acac-43cc91b44835-kube-api-access-8jtw6\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg5x4\" (UniqueName: \"kubernetes.io/projected/78e7c04a-fb1a-420f-a99b-94b6b0cf899a-kube-api-access-vg5x4\") pod \"ironic-operator-controller-manager-649675d675-6h8b6\" (UID: \"78e7c04a-fb1a-420f-a99b-94b6b0cf899a\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444319 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gqj\" (UniqueName: \"kubernetes.io/projected/78e5cfa4-7e8c-4fb2-b90d-bd9967385a71-kube-api-access-98gqj\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-42wnq\" (UID: \"78e5cfa4-7e8c-4fb2-b90d-bd9967385a71\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.444347 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zgcb8\" (UniqueName: \"kubernetes.io/projected/e1d4f271-a424-45d7-abf0-33633ac7713c-kube-api-access-zgcb8\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-86s48\" (UID: \"e1d4f271-a424-45d7-abf0-33633ac7713c\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" Oct 06 12:00:55 crc kubenswrapper[4958]: E1006 12:00:55.444555 4958 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 06 12:00:55 crc kubenswrapper[4958]: E1006 12:00:55.444608 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0f56a2-c168-4707-acac-43cc91b44835-cert podName:8a0f56a2-c168-4707-acac-43cc91b44835 nodeName:}" failed. No retries permitted until 2025-10-06 12:00:55.944592418 +0000 UTC m=+809.830617726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a0f56a2-c168-4707-acac-43cc91b44835-cert") pod "infra-operator-controller-manager-658588b8c9-dnk6g" (UID: "8a0f56a2-c168-4707-acac-43cc91b44835") : secret "infra-operator-webhook-server-cert" not found Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.462276 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.465127 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jtw6\" (UniqueName: \"kubernetes.io/projected/8a0f56a2-c168-4707-acac-43cc91b44835-kube-api-access-8jtw6\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.473843 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vg5x4\" (UniqueName: \"kubernetes.io/projected/78e7c04a-fb1a-420f-a99b-94b6b0cf899a-kube-api-access-vg5x4\") pod \"ironic-operator-controller-manager-649675d675-6h8b6\" (UID: \"78e7c04a-fb1a-420f-a99b-94b6b0cf899a\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.475707 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.481935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcb8\" (UniqueName: \"kubernetes.io/projected/e1d4f271-a424-45d7-abf0-33633ac7713c-kube-api-access-zgcb8\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-86s48\" (UID: \"e1d4f271-a424-45d7-abf0-33633ac7713c\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.487184 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.496405 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.508395 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.508513 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nz8vs" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.512856 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwctl\" (UniqueName: \"kubernetes.io/projected/7d2f0a48-cffe-49d6-8ac8-830558228e2a-kube-api-access-gwctl\") pod \"horizon-operator-controller-manager-76d5b87f47-7nfvh\" (UID: \"7d2f0a48-cffe-49d6-8ac8-830558228e2a\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.537485 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.538777 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.544216 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pccng" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.546654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lb28\" (UniqueName: \"kubernetes.io/projected/28613b85-8223-4190-b2de-a88d186b8901-kube-api-access-9lb28\") pod \"ovn-operator-controller-manager-646d647dd5-fqrj2\" (UID: \"28613b85-8223-4190-b2de-a88d186b8901\") " pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.546823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsxv\" (UniqueName: \"kubernetes.io/projected/7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97-kube-api-access-vqsxv\") pod \"placement-operator-controller-manager-54689d9f88-lf55c\" (UID: \"7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.546914 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98gqj\" (UniqueName: \"kubernetes.io/projected/78e5cfa4-7e8c-4fb2-b90d-bd9967385a71-kube-api-access-98gqj\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-42wnq\" (UID: \"78e5cfa4-7e8c-4fb2-b90d-bd9967385a71\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.546993 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6558de0f-a2a0-4841-9764-574061835f3b-cert\") pod 
\"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.547103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdb5\" (UniqueName: \"kubernetes.io/projected/6558de0f-a2a0-4841-9764-574061835f3b-kube-api-access-5zdb5\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.547206 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxw9c\" (UniqueName: \"kubernetes.io/projected/cb612d52-fceb-471f-af53-104bfc2966e7-kube-api-access-mxw9c\") pod \"manila-operator-controller-manager-65d89cfd9f-gqj7w\" (UID: \"cb612d52-fceb-471f-af53-104bfc2966e7\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.547283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swzjz\" (UniqueName: \"kubernetes.io/projected/a8547a74-8a2d-4a7f-9852-71036642c51a-kube-api-access-swzjz\") pod \"swift-operator-controller-manager-6859f9b676-4d8cf\" (UID: \"a8547a74-8a2d-4a7f-9852-71036642c51a\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.547360 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrvt\" (UniqueName: \"kubernetes.io/projected/3332b65c-b3bf-44f5-ae31-865d77029641-kube-api-access-wsrvt\") pod \"neutron-operator-controller-manager-8d984cc4d-7r9bg\" 
(UID: \"3332b65c-b3bf-44f5-ae31-865d77029641\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.547433 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54w94\" (UniqueName: \"kubernetes.io/projected/28816408-a493-4e27-8213-998d338cc1d0-kube-api-access-54w94\") pod \"octavia-operator-controller-manager-7468f855d8-crg8g\" (UID: \"28816408-a493-4e27-8213-998d338cc1d0\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.547528 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp2n8\" (UniqueName: \"kubernetes.io/projected/c35f7c69-655c-4d86-bcfd-29a899cf3011-kube-api-access-wp2n8\") pod \"nova-operator-controller-manager-7c7fc454ff-nvbjx\" (UID: \"c35f7c69-655c-4d86-bcfd-29a899cf3011\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.559561 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.573156 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.575609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98gqj\" (UniqueName: \"kubernetes.io/projected/78e5cfa4-7e8c-4fb2-b90d-bd9967385a71-kube-api-access-98gqj\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-42wnq\" (UID: \"78e5cfa4-7e8c-4fb2-b90d-bd9967385a71\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.575798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxw9c\" (UniqueName: \"kubernetes.io/projected/cb612d52-fceb-471f-af53-104bfc2966e7-kube-api-access-mxw9c\") pod \"manila-operator-controller-manager-65d89cfd9f-gqj7w\" (UID: \"cb612d52-fceb-471f-af53-104bfc2966e7\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.630457 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.642791 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650465 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zdb5\" (UniqueName: \"kubernetes.io/projected/6558de0f-a2a0-4841-9764-574061835f3b-kube-api-access-5zdb5\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650520 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swzjz\" (UniqueName: \"kubernetes.io/projected/a8547a74-8a2d-4a7f-9852-71036642c51a-kube-api-access-swzjz\") pod \"swift-operator-controller-manager-6859f9b676-4d8cf\" (UID: \"a8547a74-8a2d-4a7f-9852-71036642c51a\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650552 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrvt\" (UniqueName: \"kubernetes.io/projected/3332b65c-b3bf-44f5-ae31-865d77029641-kube-api-access-wsrvt\") pod \"neutron-operator-controller-manager-8d984cc4d-7r9bg\" (UID: \"3332b65c-b3bf-44f5-ae31-865d77029641\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650572 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54w94\" (UniqueName: \"kubernetes.io/projected/28816408-a493-4e27-8213-998d338cc1d0-kube-api-access-54w94\") pod 
\"octavia-operator-controller-manager-7468f855d8-crg8g\" (UID: \"28816408-a493-4e27-8213-998d338cc1d0\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp2n8\" (UniqueName: \"kubernetes.io/projected/c35f7c69-655c-4d86-bcfd-29a899cf3011-kube-api-access-wp2n8\") pod \"nova-operator-controller-manager-7c7fc454ff-nvbjx\" (UID: \"c35f7c69-655c-4d86-bcfd-29a899cf3011\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650626 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lb28\" (UniqueName: \"kubernetes.io/projected/28613b85-8223-4190-b2de-a88d186b8901-kube-api-access-9lb28\") pod \"ovn-operator-controller-manager-646d647dd5-fqrj2\" (UID: \"28613b85-8223-4190-b2de-a88d186b8901\") " pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c060a91a-1009-469c-a9f4-d2e3b3d34840-kube-api-access-2m6ts\") pod \"telemetry-operator-controller-manager-5d4d74dd89-zfxpj\" (UID: \"c060a91a-1009-469c-a9f4-d2e3b3d34840\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsxv\" (UniqueName: \"kubernetes.io/projected/7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97-kube-api-access-vqsxv\") pod \"placement-operator-controller-manager-54689d9f88-lf55c\" (UID: \"7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97\") " 
pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650700 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4wvw\" (UniqueName: \"kubernetes.io/projected/d79b059c-ab85-4eed-937e-6f7844c24621-kube-api-access-g4wvw\") pod \"test-operator-controller-manager-5cd5cb47d7-m5gw5\" (UID: \"d79b059c-ab85-4eed-937e-6f7844c24621\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.650717 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6558de0f-a2a0-4841-9764-574061835f3b-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:55 crc kubenswrapper[4958]: E1006 12:00:55.650848 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 06 12:00:55 crc kubenswrapper[4958]: E1006 12:00:55.650894 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6558de0f-a2a0-4841-9764-574061835f3b-cert podName:6558de0f-a2a0-4841-9764-574061835f3b nodeName:}" failed. No retries permitted until 2025-10-06 12:00:56.15088008 +0000 UTC m=+810.036905388 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6558de0f-a2a0-4841-9764-574061835f3b-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" (UID: "6558de0f-a2a0-4841-9764-574061835f3b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.651596 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.651802 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.656686 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jcmdr" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.674966 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsxv\" (UniqueName: \"kubernetes.io/projected/7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97-kube-api-access-vqsxv\") pod \"placement-operator-controller-manager-54689d9f88-lf55c\" (UID: \"7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.679415 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zdb5\" (UniqueName: \"kubernetes.io/projected/6558de0f-a2a0-4841-9764-574061835f3b-kube-api-access-5zdb5\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.683828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wsrvt\" (UniqueName: \"kubernetes.io/projected/3332b65c-b3bf-44f5-ae31-865d77029641-kube-api-access-wsrvt\") pod \"neutron-operator-controller-manager-8d984cc4d-7r9bg\" (UID: \"3332b65c-b3bf-44f5-ae31-865d77029641\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.686015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lb28\" (UniqueName: \"kubernetes.io/projected/28613b85-8223-4190-b2de-a88d186b8901-kube-api-access-9lb28\") pod \"ovn-operator-controller-manager-646d647dd5-fqrj2\" (UID: \"28613b85-8223-4190-b2de-a88d186b8901\") " pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.690827 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.700025 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.700708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp2n8\" (UniqueName: \"kubernetes.io/projected/c35f7c69-655c-4d86-bcfd-29a899cf3011-kube-api-access-wp2n8\") pod \"nova-operator-controller-manager-7c7fc454ff-nvbjx\" (UID: \"c35f7c69-655c-4d86-bcfd-29a899cf3011\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.705968 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swzjz\" (UniqueName: \"kubernetes.io/projected/a8547a74-8a2d-4a7f-9852-71036642c51a-kube-api-access-swzjz\") pod \"swift-operator-controller-manager-6859f9b676-4d8cf\" (UID: \"a8547a74-8a2d-4a7f-9852-71036642c51a\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.715436 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54w94\" (UniqueName: \"kubernetes.io/projected/28816408-a493-4e27-8213-998d338cc1d0-kube-api-access-54w94\") pod \"octavia-operator-controller-manager-7468f855d8-crg8g\" (UID: \"28816408-a493-4e27-8213-998d338cc1d0\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.717632 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.735810 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.737668 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.738880 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.747673 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-js4nh" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.748818 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.749972 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.754642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c060a91a-1009-469c-a9f4-d2e3b3d34840-kube-api-access-2m6ts\") pod \"telemetry-operator-controller-manager-5d4d74dd89-zfxpj\" (UID: \"c060a91a-1009-469c-a9f4-d2e3b3d34840\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.759797 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4wvw\" (UniqueName: \"kubernetes.io/projected/d79b059c-ab85-4eed-937e-6f7844c24621-kube-api-access-g4wvw\") pod \"test-operator-controller-manager-5cd5cb47d7-m5gw5\" (UID: \"d79b059c-ab85-4eed-937e-6f7844c24621\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.760054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv4qh\" (UniqueName: \"kubernetes.io/projected/0bd35c63-af59-499c-baaa-8cec7e13f7bc-kube-api-access-rv4qh\") pod \"watcher-operator-controller-manager-6cbc6dd547-hfq6r\" (UID: \"0bd35c63-af59-499c-baaa-8cec7e13f7bc\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.764392 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.765381 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.779635 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.784612 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c060a91a-1009-469c-a9f4-d2e3b3d34840-kube-api-access-2m6ts\") pod \"telemetry-operator-controller-manager-5d4d74dd89-zfxpj\" (UID: \"c060a91a-1009-469c-a9f4-d2e3b3d34840\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.786550 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4wvw\" (UniqueName: \"kubernetes.io/projected/d79b059c-ab85-4eed-937e-6f7844c24621-kube-api-access-g4wvw\") pod \"test-operator-controller-manager-5cd5cb47d7-m5gw5\" (UID: \"d79b059c-ab85-4eed-937e-6f7844c24621\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.826996 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.827937 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.841017 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l"] Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.847130 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5fbws" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.872042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv4qh\" (UniqueName: \"kubernetes.io/projected/0bd35c63-af59-499c-baaa-8cec7e13f7bc-kube-api-access-rv4qh\") pod \"watcher-operator-controller-manager-6cbc6dd547-hfq6r\" (UID: \"0bd35c63-af59-499c-baaa-8cec7e13f7bc\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.872208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkf7z\" (UniqueName: \"kubernetes.io/projected/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-kube-api-access-tkf7z\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.872269 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-cert\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.886224 4958 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.917477 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.923769 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv4qh\" (UniqueName: \"kubernetes.io/projected/0bd35c63-af59-499c-baaa-8cec7e13f7bc-kube-api-access-rv4qh\") pod \"watcher-operator-controller-manager-6cbc6dd547-hfq6r\" (UID: \"0bd35c63-af59-499c-baaa-8cec7e13f7bc\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.975807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a0f56a2-c168-4707-acac-43cc91b44835-cert\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.975891 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkf7z\" (UniqueName: \"kubernetes.io/projected/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-kube-api-access-tkf7z\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.975920 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbx5n\" (UniqueName: \"kubernetes.io/projected/ccce5a46-80c5-4f14-b63d-d4eff64bef36-kube-api-access-jbx5n\") pod 
\"rabbitmq-cluster-operator-manager-5f97d8c699-8c99l\" (UID: \"ccce5a46-80c5-4f14-b63d-d4eff64bef36\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.975938 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-cert\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:55 crc kubenswrapper[4958]: E1006 12:00:55.976069 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 06 12:00:55 crc kubenswrapper[4958]: E1006 12:00:55.976115 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-cert podName:e3f7c90b-8bb7-4b1c-bab9-0c341627ee32 nodeName:}" failed. No retries permitted until 2025-10-06 12:00:56.476099934 +0000 UTC m=+810.362125242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-cert") pod "openstack-operator-controller-manager-64c95c565c-djmq7" (UID: "e3f7c90b-8bb7-4b1c-bab9-0c341627ee32") : secret "webhook-server-cert" not found Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.977562 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.979831 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" Oct 06 12:00:55 crc kubenswrapper[4958]: I1006 12:00:55.987418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a0f56a2-c168-4707-acac-43cc91b44835-cert\") pod \"infra-operator-controller-manager-658588b8c9-dnk6g\" (UID: \"8a0f56a2-c168-4707-acac-43cc91b44835\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.005255 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkf7z\" (UniqueName: \"kubernetes.io/projected/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-kube-api-access-tkf7z\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.014888 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.014966 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.019530 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.055028 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6"] Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.079971 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbx5n\" (UniqueName: \"kubernetes.io/projected/ccce5a46-80c5-4f14-b63d-d4eff64bef36-kube-api-access-jbx5n\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-8c99l\" (UID: \"ccce5a46-80c5-4f14-b63d-d4eff64bef36\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.106311 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l"] Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.111866 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbx5n\" (UniqueName: \"kubernetes.io/projected/ccce5a46-80c5-4f14-b63d-d4eff64bef36-kube-api-access-jbx5n\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-8c99l\" (UID: \"ccce5a46-80c5-4f14-b63d-d4eff64bef36\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.131668 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999"] Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.181875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6558de0f-a2a0-4841-9764-574061835f3b-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.184540 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.189501 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6558de0f-a2a0-4841-9764-574061835f3b-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv\" (UID: \"6558de0f-a2a0-4841-9764-574061835f3b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.289509 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz"] Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.415155 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784"] Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.430781 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6"] Oct 06 12:00:56 crc kubenswrapper[4958]: W1006 12:00:56.432418 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2f0a48_cffe_49d6_8ac8_830558228e2a.slice/crio-15ea1296869ce675591a6678090cefce124d27b54c37cdaa6b38cceea6a07cde WatchSource:0}: Error finding container 15ea1296869ce675591a6678090cefce124d27b54c37cdaa6b38cceea6a07cde: Status 404 returned error can't find the container with id 15ea1296869ce675591a6678090cefce124d27b54c37cdaa6b38cceea6a07cde Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.436992 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh"] Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.439920 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.489576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-cert\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.492796 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3f7c90b-8bb7-4b1c-bab9-0c341627ee32-cert\") pod \"openstack-operator-controller-manager-64c95c565c-djmq7\" (UID: \"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32\") " pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.620590 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx"] Oct 06 12:00:56 crc kubenswrapper[4958]: W1006 12:00:56.628294 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35f7c69_655c_4d86_bcfd_29a899cf3011.slice/crio-ff2bb63eabdb4fb171c6ef207cbf989832934782f3d09902be04ace433337047 WatchSource:0}: Error finding container ff2bb63eabdb4fb171c6ef207cbf989832934782f3d09902be04ace433337047: Status 404 returned error can't find the container with id ff2bb63eabdb4fb171c6ef207cbf989832934782f3d09902be04ace433337047 Oct 06 12:00:56 crc kubenswrapper[4958]: I1006 12:00:56.693454 4958 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.009918 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.048855 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg"] Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.055307 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3332b65c_b3bf_44f5_ae31_865d77029641.slice/crio-5baec8f88f4c348070145a1736ea309c47eeaecd11d6d1b0a308f6bd0279e104 WatchSource:0}: Error finding container 5baec8f88f4c348070145a1736ea309c47eeaecd11d6d1b0a308f6bd0279e104: Status 404 returned error can't find the container with id 5baec8f88f4c348070145a1736ea309c47eeaecd11d6d1b0a308f6bd0279e104 Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.056336 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r"] Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.059649 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8547a74_8a2d_4a7f_9852_71036642c51a.slice/crio-0bed657c046b83a20fbbfb3244ef30aa01d67b2573f304d0f587de968a19ff56 WatchSource:0}: Error finding container 0bed657c046b83a20fbbfb3244ef30aa01d67b2573f304d0f587de968a19ff56: Status 404 returned error can't find the container with id 0bed657c046b83a20fbbfb3244ef30aa01d67b2573f304d0f587de968a19ff56 Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.060337 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a0f56a2_c168_4707_acac_43cc91b44835.slice/crio-cb33a78c345d7a44376cce91253f571c6558b976b57adc3311cb00eb8b08164e WatchSource:0}: Error finding container cb33a78c345d7a44376cce91253f571c6558b976b57adc3311cb00eb8b08164e: Status 404 returned error can't find the container with id cb33a78c345d7a44376cce91253f571c6558b976b57adc3311cb00eb8b08164e Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.064773 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.078183 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.081983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.092171 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w"] Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.096218 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb612d52_fceb_471f_af53_104bfc2966e7.slice/crio-66f2c0ee0f8d2f89f8399d5e7efb8d2e755197c83bed0d8f8530ade74d259dc9 WatchSource:0}: Error finding container 66f2c0ee0f8d2f89f8399d5e7efb8d2e755197c83bed0d8f8530ade74d259dc9: Status 404 returned error can't find the container with id 66f2c0ee0f8d2f89f8399d5e7efb8d2e755197c83bed0d8f8530ade74d259dc9 Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.102985 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:da5c3078d80878d66c616e6f8a0bb909f95d971cde2c612f96fded064113e182,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54w94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7468f855d8-crg8g_openstack-operators(28816408-a493-4e27-8213-998d338cc1d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.103094 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 
5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbx5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-8c99l_openstack-operators(ccce5a46-80c5-4f14-b63d-d4eff64bef36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.104487 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" podUID="ccce5a46-80c5-4f14-b63d-d4eff64bef36" Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.104554 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g"] Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.107122 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mxw9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-65d89cfd9f-gqj7w_openstack-operators(cb612d52-fceb-471f-af53-104bfc2966e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.125313 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.156798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" event={"ID":"a8547a74-8a2d-4a7f-9852-71036642c51a","Type":"ContainerStarted","Data":"0bed657c046b83a20fbbfb3244ef30aa01d67b2573f304d0f587de968a19ff56"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.158117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" event={"ID":"0bd35c63-af59-499c-baaa-8cec7e13f7bc","Type":"ContainerStarted","Data":"e72ebe0469372929b373fc412c1b0c313ac05fde1556c843db6f5db044bd52da"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.159181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" event={"ID":"ccce5a46-80c5-4f14-b63d-d4eff64bef36","Type":"ContainerStarted","Data":"f0745c87fb36f7e425e9afd56aeb6d50b0587b45dd19ef84bda410471680338f"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.160271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" event={"ID":"3332b65c-b3bf-44f5-ae31-865d77029641","Type":"ContainerStarted","Data":"5baec8f88f4c348070145a1736ea309c47eeaecd11d6d1b0a308f6bd0279e104"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.162272 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" 
event={"ID":"efb3e8ee-d92e-49fe-82c8-3fbe5794410f","Type":"ContainerStarted","Data":"0a0c57e59411f5e840a6c0dcb7d44c5fcb70b24cf5838e9e669a0b1b31dab3b9"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.163256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" event={"ID":"8a0f56a2-c168-4707-acac-43cc91b44835","Type":"ContainerStarted","Data":"cb33a78c345d7a44376cce91253f571c6558b976b57adc3311cb00eb8b08164e"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.164020 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.164416 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" event={"ID":"cbe81ce3-66c4-4226-bc0a-78d6757561ff","Type":"ContainerStarted","Data":"f785112d9e0f87146fbd558572802e13ec3c6b5ab790dfb222197c364b9c7fc9"} Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.166204 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" podUID="ccce5a46-80c5-4f14-b63d-d4eff64bef36" Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.168954 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" event={"ID":"da95cbd7-81b9-48e7-99eb-207063cf651a","Type":"ContainerStarted","Data":"092ee3d19a375ea69733f73473907140befe6f9dc0a054064e4b1f1521c16108"} Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.169265 4958 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28613b85_8223_4190_b2de_a88d186b8901.slice/crio-197cc74b7def8dda74601e792aa9a732e615bb776a9c94ad7d1916a634e196e9 WatchSource:0}: Error finding container 197cc74b7def8dda74601e792aa9a732e615bb776a9c94ad7d1916a634e196e9: Status 404 returned error can't find the container with id 197cc74b7def8dda74601e792aa9a732e615bb776a9c94ad7d1916a634e196e9 Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.172445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" event={"ID":"96446a00-b397-4b48-94bf-432c32ed13cb","Type":"ContainerStarted","Data":"98420e33337e1d4d8173cef584c7cbbde69277f663d0f11310dd98db4bc6a17e"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.172609 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.173699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" event={"ID":"b7998026-248b-4fdb-b5fe-8e6ca29c69f0","Type":"ContainerStarted","Data":"14947188495b7345256e5fac7a2dca557d3e4bdbe877668296606f49cf86062f"} Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.173820 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc060a91a_1009_469c_a9f4_d2e3b3d34840.slice/crio-70c5903d515db9860bd2df1761849f8ade37f153caeb455b2764fc81a0523fdf WatchSource:0}: Error finding container 70c5903d515db9860bd2df1761849f8ade37f153caeb455b2764fc81a0523fdf: Status 404 returned error can't find the container with id 70c5903d515db9860bd2df1761849f8ade37f153caeb455b2764fc81a0523fdf Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.175417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" event={"ID":"78e7c04a-fb1a-420f-a99b-94b6b0cf899a","Type":"ContainerStarted","Data":"2d8c2657f3e8db1a528dc6038522342d2b7b8ca5264212a2fc3152af5496d146"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.177818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" event={"ID":"7d2f0a48-cffe-49d6-8ac8-830558228e2a","Type":"ContainerStarted","Data":"15ea1296869ce675591a6678090cefce124d27b54c37cdaa6b38cceea6a07cde"} Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.178681 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2m6ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5d4d74dd89-zfxpj_openstack-operators(c060a91a-1009-469c-a9f4-d2e3b3d34840): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.178785 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.113:5001/openstack-k8s-operators/ovn-operator:dfb0ad220b9fa8216480f4a470cb6adfa466757d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lb28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-646d647dd5-fqrj2_openstack-operators(28613b85-8223-4190-b2de-a88d186b8901): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.180884 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2"] Oct 06 12:00:57 crc kubenswrapper[4958]: 
I1006 12:00:57.181623 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" event={"ID":"cb612d52-fceb-471f-af53-104bfc2966e7","Type":"ContainerStarted","Data":"66f2c0ee0f8d2f89f8399d5e7efb8d2e755197c83bed0d8f8530ade74d259dc9"} Oct 06 12:00:57 crc kubenswrapper[4958]: W1006 12:00:57.182570 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f7c90b_8bb7_4b1c_bab9_0c341627ee32.slice/crio-3308fd9e82cca0c9196ce2f88344ada9896f38abfe1d59be7dafad9346e3bffc WatchSource:0}: Error finding container 3308fd9e82cca0c9196ce2f88344ada9896f38abfe1d59be7dafad9346e3bffc: Status 404 returned error can't find the container with id 3308fd9e82cca0c9196ce2f88344ada9896f38abfe1d59be7dafad9346e3bffc Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.183711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" event={"ID":"c35f7c69-655c-4d86-bcfd-29a899cf3011","Type":"ContainerStarted","Data":"ff2bb63eabdb4fb171c6ef207cbf989832934782f3d09902be04ace433337047"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.185744 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" event={"ID":"28816408-a493-4e27-8213-998d338cc1d0","Type":"ContainerStarted","Data":"1bfd2ad6ae9796d2c77860e09185abe298ad7df0f5a487666c85fc4f8cd483ed"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.187472 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" event={"ID":"e1d4f271-a424-45d7-abf0-33633ac7713c","Type":"ContainerStarted","Data":"22ef46ba370b94e7304344447902a83af4270afad2fca7e25df9bf9a2e824afb"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.188090 4958 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj"] Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.188082 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4wvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd5cb47d7-m5gw5_openstack-operators(d79b059c-ab85-4eed-937e-6f7844c24621): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.188280 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_D
EFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_I
MAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMA
GE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podifi
ed-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DB
CLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resour
ces:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zdb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv_openstack-operators(6558de0f-a2a0-4841-9764-574061835f3b): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.190942 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" event={"ID":"78e5cfa4-7e8c-4fb2-b90d-bd9967385a71","Type":"ContainerStarted","Data":"1165e96f428bb99ae34cb1a9b3ba35a7bc89ca9f69dc0fe86a0d070e9006336d"} Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.195486 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv"] Oct 06 12:00:57 crc kubenswrapper[4958]: I1006 12:00:57.199588 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7"] Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.311254 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" podUID="28816408-a493-4e27-8213-998d338cc1d0" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.482901 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" podUID="cb612d52-fceb-471f-af53-104bfc2966e7" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.612467 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" podUID="d79b059c-ab85-4eed-937e-6f7844c24621" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.612690 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" podUID="c060a91a-1009-469c-a9f4-d2e3b3d34840" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.632132 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" podUID="28613b85-8223-4190-b2de-a88d186b8901" Oct 06 12:00:57 crc kubenswrapper[4958]: E1006 12:00:57.660974 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" podUID="6558de0f-a2a0-4841-9764-574061835f3b" Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.234092 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" event={"ID":"c060a91a-1009-469c-a9f4-d2e3b3d34840","Type":"ContainerStarted","Data":"c1544e2deb4582c256f0e9eb4307bb6151fa1142481af72b1c515e40e44ac1e6"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.234394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" event={"ID":"c060a91a-1009-469c-a9f4-d2e3b3d34840","Type":"ContainerStarted","Data":"70c5903d515db9860bd2df1761849f8ade37f153caeb455b2764fc81a0523fdf"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.236065 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" event={"ID":"cb612d52-fceb-471f-af53-104bfc2966e7","Type":"ContainerStarted","Data":"51a323176e6dea2b17cb493a1a7f684cf1950195a73e70e7db54631818040221"} Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.237843 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" podUID="c060a91a-1009-469c-a9f4-d2e3b3d34840" Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.239542 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757\\\"\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" podUID="cb612d52-fceb-471f-af53-104bfc2966e7" Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.240819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" event={"ID":"d79b059c-ab85-4eed-937e-6f7844c24621","Type":"ContainerStarted","Data":"b2c71713e7442b902d5cd170c1800ca5866d1fdb999d99cfff80af7e82ef951c"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.240848 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" event={"ID":"d79b059c-ab85-4eed-937e-6f7844c24621","Type":"ContainerStarted","Data":"f14853da36b76d04a7d9860479d2dc6b90a64fa5d75e902eaff3df39ae414c48"} Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.242178 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" podUID="d79b059c-ab85-4eed-937e-6f7844c24621" Oct 06 12:00:58 crc 
kubenswrapper[4958]: I1006 12:00:58.242761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" event={"ID":"28816408-a493-4e27-8213-998d338cc1d0","Type":"ContainerStarted","Data":"51a8169c7a6ab3f053c7e2dabaf87fcfc24bb5c83b6f04c8a6c8245a2188a0d1"} Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.243788 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:da5c3078d80878d66c616e6f8a0bb909f95d971cde2c612f96fded064113e182\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" podUID="28816408-a493-4e27-8213-998d338cc1d0" Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.244551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" event={"ID":"28613b85-8223-4190-b2de-a88d186b8901","Type":"ContainerStarted","Data":"ee238192780b3c1e65fdb8a3cfacfb3aeb0c16cdb0523c7360084a70ca6631d5"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.244575 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" event={"ID":"28613b85-8223-4190-b2de-a88d186b8901","Type":"ContainerStarted","Data":"197cc74b7def8dda74601e792aa9a732e615bb776a9c94ad7d1916a634e196e9"} Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.248385 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.113:5001/openstack-k8s-operators/ovn-operator:dfb0ad220b9fa8216480f4a470cb6adfa466757d\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" podUID="28613b85-8223-4190-b2de-a88d186b8901" Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 
12:00:58.264117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" event={"ID":"6558de0f-a2a0-4841-9764-574061835f3b","Type":"ContainerStarted","Data":"172e446400e65697821c04cb7bd72c1aa9a598ccb30bcb92d7dbbacefd432875"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.264180 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" event={"ID":"6558de0f-a2a0-4841-9764-574061835f3b","Type":"ContainerStarted","Data":"4db0e5ba44934981dcfb9004f097384ff2925e73f3de51cd4ce530922f21bfcb"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.265297 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" event={"ID":"7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97","Type":"ContainerStarted","Data":"8f382dd22d1cb7a3ec74dcc8460c66a644cb217689a48119177e0b6294c7cbf4"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.268580 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" event={"ID":"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32","Type":"ContainerStarted","Data":"f71f724e36b9bfa4dd605e28cbdeec81a9e7b090dce0cdea720796a3299500db"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.268604 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" event={"ID":"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32","Type":"ContainerStarted","Data":"bc4c0a44e17bd70779257cb1b9a68073b6a4df2a94bd10e6cf975a40a7469d3e"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.268613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" 
event={"ID":"e3f7c90b-8bb7-4b1c-bab9-0c341627ee32","Type":"ContainerStarted","Data":"3308fd9e82cca0c9196ce2f88344ada9896f38abfe1d59be7dafad9346e3bffc"} Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.268736 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.269200 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" podUID="6558de0f-a2a0-4841-9764-574061835f3b" Oct 06 12:00:58 crc kubenswrapper[4958]: E1006 12:00:58.269698 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" podUID="ccce5a46-80c5-4f14-b63d-d4eff64bef36" Oct 06 12:00:58 crc kubenswrapper[4958]: I1006 12:00:58.348078 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" podStartSLOduration=3.348061006 podStartE2EDuration="3.348061006s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:00:58.346689523 +0000 UTC m=+812.232714821" watchObservedRunningTime="2025-10-06 12:00:58.348061006 +0000 UTC m=+812.234086314" Oct 06 12:00:59 crc kubenswrapper[4958]: E1006 
12:00:59.276800 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757\\\"\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" podUID="cb612d52-fceb-471f-af53-104bfc2966e7" Oct 06 12:00:59 crc kubenswrapper[4958]: E1006 12:00:59.276847 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" podUID="c060a91a-1009-469c-a9f4-d2e3b3d34840" Oct 06 12:00:59 crc kubenswrapper[4958]: E1006 12:00:59.276899 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.113:5001/openstack-k8s-operators/ovn-operator:dfb0ad220b9fa8216480f4a470cb6adfa466757d\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" podUID="28613b85-8223-4190-b2de-a88d186b8901" Oct 06 12:00:59 crc kubenswrapper[4958]: E1006 12:00:59.279608 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" podUID="6558de0f-a2a0-4841-9764-574061835f3b" Oct 06 12:00:59 crc kubenswrapper[4958]: E1006 12:00:59.279546 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" podUID="d79b059c-ab85-4eed-937e-6f7844c24621" Oct 06 12:00:59 crc kubenswrapper[4958]: E1006 12:00:59.279719 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:da5c3078d80878d66c616e6f8a0bb909f95d971cde2c612f96fded064113e182\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" podUID="28816408-a493-4e27-8213-998d338cc1d0" Oct 06 12:01:00 crc kubenswrapper[4958]: I1006 12:01:00.821747 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:01:00 crc kubenswrapper[4958]: I1006 12:01:00.864112 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:01:01 crc kubenswrapper[4958]: I1006 12:01:01.052225 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8wx6x"] Oct 06 12:01:02 crc kubenswrapper[4958]: I1006 12:01:02.290531 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8wx6x" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="registry-server" containerID="cri-o://639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929" gracePeriod=2 Oct 06 12:01:03 crc kubenswrapper[4958]: I1006 12:01:03.304231 4958 generic.go:334] "Generic (PLEG): container finished" podID="f7018d40-65db-421c-941b-92c80dfb9166" containerID="639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929" exitCode=0 Oct 06 12:01:03 crc kubenswrapper[4958]: 
I1006 12:01:03.304392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerDied","Data":"639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929"} Oct 06 12:01:03 crc kubenswrapper[4958]: I1006 12:01:03.981597 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qqm7m"] Oct 06 12:01:03 crc kubenswrapper[4958]: I1006 12:01:03.985588 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:03 crc kubenswrapper[4958]: I1006 12:01:03.999799 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqm7m"] Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.125824 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ab94cb-0fde-495e-9e1b-cb57600ce892-catalog-content\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.126218 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28h6z\" (UniqueName: \"kubernetes.io/projected/c8ab94cb-0fde-495e-9e1b-cb57600ce892-kube-api-access-28h6z\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.126243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ab94cb-0fde-495e-9e1b-cb57600ce892-utilities\") pod \"certified-operators-qqm7m\" (UID: 
\"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.227579 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28h6z\" (UniqueName: \"kubernetes.io/projected/c8ab94cb-0fde-495e-9e1b-cb57600ce892-kube-api-access-28h6z\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.227629 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ab94cb-0fde-495e-9e1b-cb57600ce892-utilities\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.227713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ab94cb-0fde-495e-9e1b-cb57600ce892-catalog-content\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.228190 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ab94cb-0fde-495e-9e1b-cb57600ce892-catalog-content\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.228370 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ab94cb-0fde-495e-9e1b-cb57600ce892-utilities\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") 
" pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.245775 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28h6z\" (UniqueName: \"kubernetes.io/projected/c8ab94cb-0fde-495e-9e1b-cb57600ce892-kube-api-access-28h6z\") pod \"certified-operators-qqm7m\" (UID: \"c8ab94cb-0fde-495e-9e1b-cb57600ce892\") " pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:04 crc kubenswrapper[4958]: I1006 12:01:04.333866 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqm7m" Oct 06 12:01:06 crc kubenswrapper[4958]: I1006 12:01:06.701705 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64c95c565c-djmq7" Oct 06 12:01:10 crc kubenswrapper[4958]: E1006 12:01:10.784804 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929 is running failed: container process not found" containerID="639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929" cmd=["grpc_health_probe","-addr=:50051"] Oct 06 12:01:10 crc kubenswrapper[4958]: E1006 12:01:10.786016 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929 is running failed: container process not found" containerID="639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929" cmd=["grpc_health_probe","-addr=:50051"] Oct 06 12:01:10 crc kubenswrapper[4958]: E1006 12:01:10.787182 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929 is running failed: container process not found" containerID="639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929" cmd=["grpc_health_probe","-addr=:50051"] Oct 06 12:01:10 crc kubenswrapper[4958]: E1006 12:01:10.787280 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8wx6x" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="registry-server" Oct 06 12:01:11 crc kubenswrapper[4958]: E1006 12:01:11.184327 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f" Oct 06 12:01:11 crc kubenswrapper[4958]: E1006 12:01:11.184899 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jtw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-658588b8c9-dnk6g_openstack-operators(8a0f56a2-c168-4707-acac-43cc91b44835): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.616858 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.738709 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-utilities\") pod \"f7018d40-65db-421c-941b-92c80dfb9166\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.738806 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-catalog-content\") pod \"f7018d40-65db-421c-941b-92c80dfb9166\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.738894 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966nw\" (UniqueName: \"kubernetes.io/projected/f7018d40-65db-421c-941b-92c80dfb9166-kube-api-access-966nw\") pod \"f7018d40-65db-421c-941b-92c80dfb9166\" (UID: \"f7018d40-65db-421c-941b-92c80dfb9166\") " Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.741267 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-utilities" (OuterVolumeSpecName: "utilities") pod "f7018d40-65db-421c-941b-92c80dfb9166" (UID: "f7018d40-65db-421c-941b-92c80dfb9166"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.748999 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7018d40-65db-421c-941b-92c80dfb9166-kube-api-access-966nw" (OuterVolumeSpecName: "kube-api-access-966nw") pod "f7018d40-65db-421c-941b-92c80dfb9166" (UID: "f7018d40-65db-421c-941b-92c80dfb9166"). InnerVolumeSpecName "kube-api-access-966nw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.832694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7018d40-65db-421c-941b-92c80dfb9166" (UID: "f7018d40-65db-421c-941b-92c80dfb9166"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.841129 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.841183 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-966nw\" (UniqueName: \"kubernetes.io/projected/f7018d40-65db-421c-941b-92c80dfb9166-kube-api-access-966nw\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:11 crc kubenswrapper[4958]: I1006 12:01:11.841200 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7018d40-65db-421c-941b-92c80dfb9166-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.367786 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wx6x" event={"ID":"f7018d40-65db-421c-941b-92c80dfb9166","Type":"ContainerDied","Data":"f8a00b0d4a45a77d8cf2ae4a6a1be013892258ce21fee85e95bd243b493ba857"} Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.368155 4958 scope.go:117] "RemoveContainer" containerID="639e518fa7752accc8ca854f8229d27c2d65192f129cf321bebc8b176b9ce929" Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.367857 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8wx6x" Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.399050 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8wx6x"] Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.415839 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8wx6x"] Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.733561 4958 scope.go:117] "RemoveContainer" containerID="4d306fbbf6c918cba42304ace420077897fc4d9621e28eebe89ec281d6d248a4" Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.754386 4958 scope.go:117] "RemoveContainer" containerID="c8905abbcf4191168b1967cdd346e2fb37755236d2825f1a51ae5873f760111d" Oct 06 12:01:12 crc kubenswrapper[4958]: I1006 12:01:12.925545 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7018d40-65db-421c-941b-92c80dfb9166" path="/var/lib/kubelet/pods/f7018d40-65db-421c-941b-92c80dfb9166/volumes" Oct 06 12:01:12 crc kubenswrapper[4958]: E1006 12:01:12.958528 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" podUID="8a0f56a2-c168-4707-acac-43cc91b44835" Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.195357 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqm7m"] Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.391088 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" event={"ID":"cbe81ce3-66c4-4226-bc0a-78d6757561ff","Type":"ContainerStarted","Data":"d6313113950d81c6266e72730d611557ab5546c10199afcceaa00f4ed75e4981"} Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.409684 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" event={"ID":"3332b65c-b3bf-44f5-ae31-865d77029641","Type":"ContainerStarted","Data":"71b02bf598e8d4937790ff92c3c54468dd87b2ee958b5c6d7e77f59102803998"} Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.411679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" event={"ID":"8a0f56a2-c168-4707-acac-43cc91b44835","Type":"ContainerStarted","Data":"76a7dc1378de80f343dd8e7f6e8ba337bb019c33ec853643c23f56ed25b466c5"} Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.456207 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" event={"ID":"7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97","Type":"ContainerStarted","Data":"d52c71abca9772a696daf5c7abfecb5da6c17fe6a1145b15e458d0fce2bf2f77"} Oct 06 12:01:13 crc kubenswrapper[4958]: E1006 12:01:13.467625 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" podUID="8a0f56a2-c168-4707-acac-43cc91b44835" Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.478747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" event={"ID":"c35f7c69-655c-4d86-bcfd-29a899cf3011","Type":"ContainerStarted","Data":"bfa1f3a1a8b9adfa3f584b39193e4562d9bf16607c159a84987661e34554c856"} Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.524487 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" 
event={"ID":"efb3e8ee-d92e-49fe-82c8-3fbe5794410f","Type":"ContainerStarted","Data":"a16b150a2d5112a91fd5c98f7b6fd985ddf1680cf9ef12877f6792e15ce22dad"} Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.527416 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqm7m" event={"ID":"c8ab94cb-0fde-495e-9e1b-cb57600ce892","Type":"ContainerStarted","Data":"ee550958174ab1edd2913256b35357772d1ebace963bd9b770f77a5f068d23e2"} Oct 06 12:01:13 crc kubenswrapper[4958]: I1006 12:01:13.538918 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" event={"ID":"0bd35c63-af59-499c-baaa-8cec7e13f7bc","Type":"ContainerStarted","Data":"3cd8e66b43e8d6f93c1c87a92b8e859b7c7900ed8ec2eb10eb5220a3d9ba9d8c"} Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.569530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" event={"ID":"96446a00-b397-4b48-94bf-432c32ed13cb","Type":"ContainerStarted","Data":"982e04492f766b4c99a58ca7b8f8e43fd5e3563a3aaae793dff5bf3ae462eb40"} Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.571883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" event={"ID":"c35f7c69-655c-4d86-bcfd-29a899cf3011","Type":"ContainerStarted","Data":"cb1e63b92dda3918d8ebf89b9fe2ed566a4609d72fb6c56b454ea73be6519c20"} Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.572083 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.578813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" 
event={"ID":"efb3e8ee-d92e-49fe-82c8-3fbe5794410f","Type":"ContainerStarted","Data":"a534828414f0e7956724982524957924376e31f0efc1aaa471d0b1dc338a8336"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.579320 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.580563 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" event={"ID":"da95cbd7-81b9-48e7-99eb-207063cf651a","Type":"ContainerStarted","Data":"d7d3eaa1a1e4c2487bd6fc00dbdde9ecbf8a78d8894d2a08364d7b18db529a0d"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.582982 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" event={"ID":"78e5cfa4-7e8c-4fb2-b90d-bd9967385a71","Type":"ContainerStarted","Data":"264f1b22e40c5f8e0c19286a7c81417b59a2f38a4d692769cc9dc1ee8fe9fd29"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.583006 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" event={"ID":"78e5cfa4-7e8c-4fb2-b90d-bd9967385a71","Type":"ContainerStarted","Data":"9afed374ee036a3e4a1aa1b55095980ea772acc74a175d1b12b9edce6dd444a6"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.583128 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.585481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" event={"ID":"7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97","Type":"ContainerStarted","Data":"7afacd791013d59bb5acd739b3c8398a2640feda93e0c2fe43faf55c583daa87"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.585946 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.588609 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8ab94cb-0fde-495e-9e1b-cb57600ce892" containerID="8a833e028753a22abdd2b6df7f8d99c85ba1763f1a15dbd6d8154df4c02b5b2e" exitCode=0
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.588663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqm7m" event={"ID":"c8ab94cb-0fde-495e-9e1b-cb57600ce892","Type":"ContainerDied","Data":"8a833e028753a22abdd2b6df7f8d99c85ba1763f1a15dbd6d8154df4c02b5b2e"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.598635 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx" podStartSLOduration=3.952002936 podStartE2EDuration="19.59861529s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.631777872 +0000 UTC m=+810.517803180" lastFinishedPulling="2025-10-06 12:01:12.278390226 +0000 UTC m=+826.164415534" observedRunningTime="2025-10-06 12:01:14.588914832 +0000 UTC m=+828.474940140" watchObservedRunningTime="2025-10-06 12:01:14.59861529 +0000 UTC m=+828.484640598"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.605291 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" event={"ID":"0bd35c63-af59-499c-baaa-8cec7e13f7bc","Type":"ContainerStarted","Data":"03efacd66a13fb905f80259eb8b4e99e9945da740584b74959212c710a25405d"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.605960 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.620006 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" event={"ID":"e1d4f271-a424-45d7-abf0-33633ac7713c","Type":"ContainerStarted","Data":"6a223cd1b0db5987bee3b55e62a8fca3929b90e254355c1c8f908d683f3f6754"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.645818 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz" podStartSLOduration=4.676245474 podStartE2EDuration="20.645803606s" podCreationTimestamp="2025-10-06 12:00:54 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.30991851 +0000 UTC m=+810.195943808" lastFinishedPulling="2025-10-06 12:01:12.279476632 +0000 UTC m=+826.165501940" observedRunningTime="2025-10-06 12:01:14.643038578 +0000 UTC m=+828.529063886" watchObservedRunningTime="2025-10-06 12:01:14.645803606 +0000 UTC m=+828.531828914"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.647776 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" event={"ID":"d79b059c-ab85-4eed-937e-6f7844c24621","Type":"ContainerStarted","Data":"bc819187d85b7404cd4c2d587e30ca6de18fd3051a1da6f2d234b5197be85720"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.648065 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.652305 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" event={"ID":"7d2f0a48-cffe-49d6-8ac8-830558228e2a","Type":"ContainerStarted","Data":"3627ca14fdf68e23c046e6dccda8850b6dabd5555a174c9e543410564c7d608c"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.659397 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c" podStartSLOduration=4.54667027 podStartE2EDuration="19.659379858s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.166374585 +0000 UTC m=+811.052399893" lastFinishedPulling="2025-10-06 12:01:12.279084173 +0000 UTC m=+826.165109481" observedRunningTime="2025-10-06 12:01:14.656892087 +0000 UTC m=+828.542917395" watchObservedRunningTime="2025-10-06 12:01:14.659379858 +0000 UTC m=+828.545405156"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.665756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" event={"ID":"a8547a74-8a2d-4a7f-9852-71036642c51a","Type":"ContainerStarted","Data":"c0ca4096c1f0f3d8b8bcb71da1aa7e3282b2a4aff2e0fcb6096c3eaa5208dfc0"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.665799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" event={"ID":"a8547a74-8a2d-4a7f-9852-71036642c51a","Type":"ContainerStarted","Data":"07da9df1229ed47cc3f994908d653ee88a522c04f34f4c0168834559ebb998d1"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.665982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.675903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" event={"ID":"b7998026-248b-4fdb-b5fe-8e6ca29c69f0","Type":"ContainerStarted","Data":"e158cae8fa945542e7220c8adda501c913706774e6de5d92b1e450658460e4b2"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.679113 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq" podStartSLOduration=3.9802863889999998 podStartE2EDuration="19.679096741s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.049061392 +0000 UTC m=+810.935086700" lastFinishedPulling="2025-10-06 12:01:12.747871744 +0000 UTC m=+826.633897052" observedRunningTime="2025-10-06 12:01:14.676233911 +0000 UTC m=+828.562259219" watchObservedRunningTime="2025-10-06 12:01:14.679096741 +0000 UTC m=+828.565122049"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.684182 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" event={"ID":"cbe81ce3-66c4-4226-bc0a-78d6757561ff","Type":"ContainerStarted","Data":"4035cc26def4e084a228d848c21973903cf867b93e27b6e7c28da3287c49e96b"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.684265 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.690591 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" event={"ID":"78e7c04a-fb1a-420f-a99b-94b6b0cf899a","Type":"ContainerStarted","Data":"a5e50a9f60119772e46d70835df97fb41973d891abcc118594549f579b0e8c94"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.690625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" event={"ID":"78e7c04a-fb1a-420f-a99b-94b6b0cf899a","Type":"ContainerStarted","Data":"5bd74355940656f3d573a5113deb8f17c33ec83bd62eb14bd070a15a3a3a38ae"}
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.690639 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6"
Oct 06 12:01:14 crc kubenswrapper[4958]: E1006 12:01:14.692939 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" podUID="8a0f56a2-c168-4707-acac-43cc91b44835"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.717505 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r" podStartSLOduration=4.046104541 podStartE2EDuration="19.717487251s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.047226017 +0000 UTC m=+810.933251325" lastFinishedPulling="2025-10-06 12:01:12.718608727 +0000 UTC m=+826.604634035" observedRunningTime="2025-10-06 12:01:14.700094085 +0000 UTC m=+828.586119393" watchObservedRunningTime="2025-10-06 12:01:14.717487251 +0000 UTC m=+828.603512559"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.718666 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5" podStartSLOduration=3.867063746 podStartE2EDuration="19.71866147s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.187961274 +0000 UTC m=+811.073986582" lastFinishedPulling="2025-10-06 12:01:13.039558998 +0000 UTC m=+826.925584306" observedRunningTime="2025-10-06 12:01:14.714871247 +0000 UTC m=+828.600896545" watchObservedRunningTime="2025-10-06 12:01:14.71866147 +0000 UTC m=+828.604686778"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.738531 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf" podStartSLOduration=4.059346796 podStartE2EDuration="19.738513276s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.062479741 +0000 UTC m=+810.948505049" lastFinishedPulling="2025-10-06 12:01:12.741646221 +0000 UTC m=+826.627671529" observedRunningTime="2025-10-06 12:01:14.735965834 +0000 UTC m=+828.621991142" watchObservedRunningTime="2025-10-06 12:01:14.738513276 +0000 UTC m=+828.624538584"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.787750 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6" podStartSLOduration=3.50623496 podStartE2EDuration="19.787735712s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.437794692 +0000 UTC m=+810.323820000" lastFinishedPulling="2025-10-06 12:01:12.719295444 +0000 UTC m=+826.605320752" observedRunningTime="2025-10-06 12:01:14.779363497 +0000 UTC m=+828.665388805" watchObservedRunningTime="2025-10-06 12:01:14.787735712 +0000 UTC m=+828.673761010"
Oct 06 12:01:14 crc kubenswrapper[4958]: I1006 12:01:14.801722 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6" podStartSLOduration=4.681447261 podStartE2EDuration="20.801712584s" podCreationTimestamp="2025-10-06 12:00:54 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.159615289 +0000 UTC m=+810.045640597" lastFinishedPulling="2025-10-06 12:01:12.279880612 +0000 UTC m=+826.165905920" observedRunningTime="2025-10-06 12:01:14.801231082 +0000 UTC m=+828.687256380" watchObservedRunningTime="2025-10-06 12:01:14.801712584 +0000 UTC m=+828.687737892"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.699617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" event={"ID":"b7998026-248b-4fdb-b5fe-8e6ca29c69f0","Type":"ContainerStarted","Data":"865b3bc2e059f177ec273f40c21800501811c42c9025bec3902ca543e4e55977"}
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.699704 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.704749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" event={"ID":"96446a00-b397-4b48-94bf-432c32ed13cb","Type":"ContainerStarted","Data":"e737e4fbb97bdb33910b4a8f681ba75004dd4c3b74bc902772e2a87657a957b3"}
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.704872 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.707528 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" event={"ID":"3332b65c-b3bf-44f5-ae31-865d77029641","Type":"ContainerStarted","Data":"b0997553c7c0a650da5abdaa8cd3ed49cb1905170ff3a2936c957efab12fb81b"}
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.707637 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.710574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" event={"ID":"e1d4f271-a424-45d7-abf0-33633ac7713c","Type":"ContainerStarted","Data":"d6c96778db7959912efddb22a66a5f52848f386ab966545a443a557d6dbff662"}
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.710621 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.712411 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" event={"ID":"da95cbd7-81b9-48e7-99eb-207063cf651a","Type":"ContainerStarted","Data":"e364e54f792b4d54d6b637eeb6b8469c2d6995ed9ef14abc27adaf65e9034792"}
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.712581 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.714568 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" event={"ID":"7d2f0a48-cffe-49d6-8ac8-830558228e2a","Type":"ContainerStarted","Data":"40f08e97c787708270904e9431af4faf3d4abfce96fe1221fbde5e451b66211c"}
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.716873 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999" podStartSLOduration=5.206699465 podStartE2EDuration="21.716856127s" podCreationTimestamp="2025-10-06 12:00:54 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.208574918 +0000 UTC m=+810.094600226" lastFinishedPulling="2025-10-06 12:01:12.71873158 +0000 UTC m=+826.604756888" observedRunningTime="2025-10-06 12:01:15.713397832 +0000 UTC m=+829.599423150" watchObservedRunningTime="2025-10-06 12:01:15.716856127 +0000 UTC m=+829.602881435"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.738743 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48" podStartSLOduration=5.073487654 podStartE2EDuration="20.738725043s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.053368258 +0000 UTC m=+810.939393556" lastFinishedPulling="2025-10-06 12:01:12.718605637 +0000 UTC m=+826.604630945" observedRunningTime="2025-10-06 12:01:15.727032416 +0000 UTC m=+829.613057734" watchObservedRunningTime="2025-10-06 12:01:15.738725043 +0000 UTC m=+829.624750351"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.747107 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg" podStartSLOduration=5.06396271 podStartE2EDuration="20.747088178s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.059627161 +0000 UTC m=+810.945652459" lastFinishedPulling="2025-10-06 12:01:12.742752609 +0000 UTC m=+826.628777927" observedRunningTime="2025-10-06 12:01:15.744664028 +0000 UTC m=+829.630689336" watchObservedRunningTime="2025-10-06 12:01:15.747088178 +0000 UTC m=+829.633113486"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.764236 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l" podStartSLOduration=4.200467062 podStartE2EDuration="20.764218617s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.170259599 +0000 UTC m=+810.056284907" lastFinishedPulling="2025-10-06 12:01:12.734011154 +0000 UTC m=+826.620036462" observedRunningTime="2025-10-06 12:01:15.759560573 +0000 UTC m=+829.645585891" watchObservedRunningTime="2025-10-06 12:01:15.764218617 +0000 UTC m=+829.650243925"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.780467 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784" podStartSLOduration=4.928430431 podStartE2EDuration="20.780438164s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.427220423 +0000 UTC m=+810.313245731" lastFinishedPulling="2025-10-06 12:01:12.279228156 +0000 UTC m=+826.165253464" observedRunningTime="2025-10-06 12:01:15.773636448 +0000 UTC m=+829.659661746" watchObservedRunningTime="2025-10-06 12:01:15.780438164 +0000 UTC m=+829.666463512"
Oct 06 12:01:15 crc kubenswrapper[4958]: I1006 12:01:15.792761 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh" podStartSLOduration=4.511183942 podStartE2EDuration="20.792737816s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:56.437128705 +0000 UTC m=+810.323154013" lastFinishedPulling="2025-10-06 12:01:12.718682579 +0000 UTC m=+826.604707887" observedRunningTime="2025-10-06 12:01:15.788525782 +0000 UTC m=+829.674551140" watchObservedRunningTime="2025-10-06 12:01:15.792737816 +0000 UTC m=+829.678763164"
Oct 06 12:01:16 crc kubenswrapper[4958]: I1006 12:01:16.722419 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.755138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" event={"ID":"6558de0f-a2a0-4841-9764-574061835f3b","Type":"ContainerStarted","Data":"5e8581bcf96660c9b345b001f8b737a05c4bcf89ab129d34b4b5a6bc7f073cbd"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.755896 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.758550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" event={"ID":"28613b85-8223-4190-b2de-a88d186b8901","Type":"ContainerStarted","Data":"8dfa3ec07e0fe5616274bab58a1e3184cedfc283fc28ebc534922318c3ad49e5"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.758778 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.765458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" event={"ID":"c060a91a-1009-469c-a9f4-d2e3b3d34840","Type":"ContainerStarted","Data":"5771009f5fb762adfdb5d8644c2cf5fc6d63f32bd1d4ffe862c5b5e14ef7e8f6"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.766217 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.770352 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8ab94cb-0fde-495e-9e1b-cb57600ce892" containerID="00be4d4e4f3345be70d7e48a9b2a715e10b53ab3c6c46e6bc241ebb1ce285e9a" exitCode=0
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.770407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqm7m" event={"ID":"c8ab94cb-0fde-495e-9e1b-cb57600ce892","Type":"ContainerDied","Data":"00be4d4e4f3345be70d7e48a9b2a715e10b53ab3c6c46e6bc241ebb1ce285e9a"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.782969 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" event={"ID":"cb612d52-fceb-471f-af53-104bfc2966e7","Type":"ContainerStarted","Data":"c20b502271c385e5e4a1f0ce6c5cb41a2b4ba260a32793ebae1c21827c26128d"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.783718 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.786256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" event={"ID":"ccce5a46-80c5-4f14-b63d-d4eff64bef36","Type":"ContainerStarted","Data":"6f628a2f393a03ed69f1a492e0a98ab1206d63c91a63de6d62799eb8b4ef2f16"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.793684 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" event={"ID":"28816408-a493-4e27-8213-998d338cc1d0","Type":"ContainerStarted","Data":"3ebd79326d9d12bd1fe8e52a297222b202c84fce972a046633428a97ea3df3e4"}
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.794325 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.796334 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv" podStartSLOduration=3.383229438 podStartE2EDuration="25.796319769s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.187923023 +0000 UTC m=+811.073948321" lastFinishedPulling="2025-10-06 12:01:19.601013344 +0000 UTC m=+833.487038652" observedRunningTime="2025-10-06 12:01:20.795259473 +0000 UTC m=+834.681284771" watchObservedRunningTime="2025-10-06 12:01:20.796319769 +0000 UTC m=+834.682345077"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.810070 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-8c99l" podStartSLOduration=3.355926459 podStartE2EDuration="25.810051055s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.103040564 +0000 UTC m=+810.989065872" lastFinishedPulling="2025-10-06 12:01:19.55716516 +0000 UTC m=+833.443190468" observedRunningTime="2025-10-06 12:01:20.80780659 +0000 UTC m=+834.693831908" watchObservedRunningTime="2025-10-06 12:01:20.810051055 +0000 UTC m=+834.696076353"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.840539 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj" podStartSLOduration=3.418092311 podStartE2EDuration="25.840516981s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.178563634 +0000 UTC m=+811.064588942" lastFinishedPulling="2025-10-06 12:01:19.600988304 +0000 UTC m=+833.487013612" observedRunningTime="2025-10-06 12:01:20.824314635 +0000 UTC m=+834.710339963" watchObservedRunningTime="2025-10-06 12:01:20.840516981 +0000 UTC m=+834.726542309"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.859967 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g" podStartSLOduration=3.404537579 podStartE2EDuration="25.859949046s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.10284909 +0000 UTC m=+810.988874398" lastFinishedPulling="2025-10-06 12:01:19.558260557 +0000 UTC m=+833.444285865" observedRunningTime="2025-10-06 12:01:20.858699276 +0000 UTC m=+834.744724594" watchObservedRunningTime="2025-10-06 12:01:20.859949046 +0000 UTC m=+834.745974354"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.874212 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w" podStartSLOduration=3.424120428 podStartE2EDuration="25.874195015s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.107028292 +0000 UTC m=+810.993053600" lastFinishedPulling="2025-10-06 12:01:19.557102879 +0000 UTC m=+833.443128187" observedRunningTime="2025-10-06 12:01:20.870368091 +0000 UTC m=+834.756393409" watchObservedRunningTime="2025-10-06 12:01:20.874195015 +0000 UTC m=+834.760220333"
Oct 06 12:01:20 crc kubenswrapper[4958]: I1006 12:01:20.893779 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2" podStartSLOduration=3.459277609 podStartE2EDuration="25.893761094s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.178730698 +0000 UTC m=+811.064756006" lastFinishedPulling="2025-10-06 12:01:19.613214183 +0000 UTC m=+833.499239491" observedRunningTime="2025-10-06 12:01:20.888309811 +0000 UTC m=+834.774335129" watchObservedRunningTime="2025-10-06 12:01:20.893761094 +0000 UTC m=+834.779786402"
Oct 06 12:01:21 crc kubenswrapper[4958]: I1006 12:01:21.805211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqm7m" event={"ID":"c8ab94cb-0fde-495e-9e1b-cb57600ce892","Type":"ContainerStarted","Data":"522e99c43278f7bd9022eff793130b32c5adb54e5132e6deb98fd987cb71ed66"}
Oct 06 12:01:21 crc kubenswrapper[4958]: I1006 12:01:21.841595 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qqm7m" podStartSLOduration=11.944402709 podStartE2EDuration="18.841573238s" podCreationTimestamp="2025-10-06 12:01:03 +0000 UTC" firstStartedPulling="2025-10-06 12:01:14.589742573 +0000 UTC m=+828.475767881" lastFinishedPulling="2025-10-06 12:01:21.486913062 +0000 UTC m=+835.372938410" observedRunningTime="2025-10-06 12:01:21.839267481 +0000 UTC m=+835.725292829" watchObservedRunningTime="2025-10-06 12:01:21.841573238 +0000 UTC m=+835.727598566"
Oct 06 12:01:23 crc kubenswrapper[4958]: I1006 12:01:23.802214 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:01:23 crc kubenswrapper[4958]: I1006 12:01:23.802304 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:01:23 crc kubenswrapper[4958]: I1006 12:01:23.802371 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z"
Oct 06 12:01:23 crc kubenswrapper[4958]: I1006 12:01:23.803249 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50ea0a44529e4bdc070f78b8c163d73c0feb9c99ae2d2366012f3431b888a961"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 12:01:23 crc kubenswrapper[4958]: I1006 12:01:23.803678 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://50ea0a44529e4bdc070f78b8c163d73c0feb9c99ae2d2366012f3431b888a961" gracePeriod=600
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.335111 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qqm7m"
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.335426 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qqm7m"
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.416830 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qqm7m"
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.838855 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="50ea0a44529e4bdc070f78b8c163d73c0feb9c99ae2d2366012f3431b888a961" exitCode=0
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.838945 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"50ea0a44529e4bdc070f78b8c163d73c0feb9c99ae2d2366012f3431b888a961"}
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.839088 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"d557eb58019a825d47465bf11cc9a867134d838f4f8d2d5b54e629c90b675773"}
Oct 06 12:01:24 crc kubenswrapper[4958]: I1006 12:01:24.839126 4958 scope.go:117] "RemoveContainer" containerID="62259fb4a3daeef462272162942ee3efe9a1f7d5314ed03623fbe14dfc330edf"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.315528 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-l8pp6"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.343111 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-th999"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.363972 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-4qklz"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.479526 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-xc89l"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.564037 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-7nfvh"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.635038 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-649675d675-6h8b6"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.695044 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-jj784"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.703587 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-86s48"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.725437 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-gqj7w"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.743652 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-42wnq"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.759663 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-nvbjx"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.772799 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-7r9bg"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.784257 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-crg8g"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.890358 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-646d647dd5-fqrj2"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.920453 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-4d8cf"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.980762 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zfxpj"
Oct 06 12:01:25 crc kubenswrapper[4958]: I1006 12:01:25.982660 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-lf55c"
Oct 06 12:01:26 crc kubenswrapper[4958]: I1006 12:01:26.017069 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-m5gw5"
Oct 06 12:01:26 crc kubenswrapper[4958]: I1006 12:01:26.025446 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-hfq6r"
Oct 06 12:01:26 crc kubenswrapper[4958]: I1006 12:01:26.450579 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv"
Oct 06 12:01:30 crc kubenswrapper[4958]: I1006 12:01:30.896383 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" event={"ID":"8a0f56a2-c168-4707-acac-43cc91b44835","Type":"ContainerStarted","Data":"008a619643bb643909b1fd71be795369f8bcb65171e3cab107eab3a72cd9c23a"}
Oct 06 12:01:30 crc kubenswrapper[4958]: I1006 12:01:30.897394 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g"
Oct 06 12:01:30 crc kubenswrapper[4958]: I1006 12:01:30.932036 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g" podStartSLOduration=4.425481284 podStartE2EDuration="35.931974231s" podCreationTimestamp="2025-10-06 12:00:55 +0000 UTC" firstStartedPulling="2025-10-06 12:00:57.064345567 +0000 UTC m=+810.950370875" lastFinishedPulling="2025-10-06 12:01:28.570838504 +0000 UTC m=+842.456863822" observedRunningTime="2025-10-06 12:01:30.919490595 +0000 UTC m=+844.805515933" watchObservedRunningTime="2025-10-06 12:01:30.931974231 +0000 UTC m=+844.817999589"
Oct 06 12:01:34 crc kubenswrapper[4958]: I1006 12:01:34.379644 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qqm7m"
Oct 06 12:01:34 crc kubenswrapper[4958]: I1006 12:01:34.484980 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqm7m"]
Oct 06 12:01:34 crc kubenswrapper[4958]: I1006 12:01:34.554476 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkchh"]
Oct 06 12:01:34 crc kubenswrapper[4958]: I1006 12:01:34.554768 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xkchh" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="registry-server" containerID="cri-o://ebd97896949aa51a7b3fdb90bc7ff13c546d6726780d846b2a184eedceea46bf" gracePeriod=2
Oct 06 12:01:34 crc kubenswrapper[4958]: I1006 12:01:34.930788 4958 generic.go:334] "Generic (PLEG): container finished" podID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerID="ebd97896949aa51a7b3fdb90bc7ff13c546d6726780d846b2a184eedceea46bf" exitCode=0
Oct 06 12:01:34 crc kubenswrapper[4958]: I1006 12:01:34.930841 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerDied","Data":"ebd97896949aa51a7b3fdb90bc7ff13c546d6726780d846b2a184eedceea46bf"}
Oct 06 12:01:36 crc kubenswrapper[4958]: I1006 12:01:36.025671 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-dnk6g"
Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.735903 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkchh"
Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.909442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-catalog-content\") pod \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") "
Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.909704 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-utilities\") pod \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") "
Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.909739 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l88rm\" (UniqueName: \"kubernetes.io/projected/590ccad0-d358-4f6a-9dcd-dfd539830f4e-kube-api-access-l88rm\") pod 
\"590ccad0-d358-4f6a-9dcd-dfd539830f4e\" (UID: \"590ccad0-d358-4f6a-9dcd-dfd539830f4e\") " Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.911119 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-utilities" (OuterVolumeSpecName: "utilities") pod "590ccad0-d358-4f6a-9dcd-dfd539830f4e" (UID: "590ccad0-d358-4f6a-9dcd-dfd539830f4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.919494 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590ccad0-d358-4f6a-9dcd-dfd539830f4e-kube-api-access-l88rm" (OuterVolumeSpecName: "kube-api-access-l88rm") pod "590ccad0-d358-4f6a-9dcd-dfd539830f4e" (UID: "590ccad0-d358-4f6a-9dcd-dfd539830f4e"). InnerVolumeSpecName "kube-api-access-l88rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.961232 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkchh" event={"ID":"590ccad0-d358-4f6a-9dcd-dfd539830f4e","Type":"ContainerDied","Data":"fe138ee04901a1ca97d1ed1298738f8e5173db4fedf65b44ab46b057fde35fbd"} Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.961332 4958 scope.go:117] "RemoveContainer" containerID="ebd97896949aa51a7b3fdb90bc7ff13c546d6726780d846b2a184eedceea46bf" Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.961344 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xkchh" Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.984609 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "590ccad0-d358-4f6a-9dcd-dfd539830f4e" (UID: "590ccad0-d358-4f6a-9dcd-dfd539830f4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:01:37 crc kubenswrapper[4958]: I1006 12:01:37.995503 4958 scope.go:117] "RemoveContainer" containerID="3b92f0f121c06cfc266d36ea953a56c359ed0552145f4615bcfd5a09cfb54609" Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.011405 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.011437 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590ccad0-d358-4f6a-9dcd-dfd539830f4e-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.011448 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l88rm\" (UniqueName: \"kubernetes.io/projected/590ccad0-d358-4f6a-9dcd-dfd539830f4e-kube-api-access-l88rm\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.017245 4958 scope.go:117] "RemoveContainer" containerID="df7ca5f26f1dd9f20e01b2b5f778447189e7b5396876e21ae3b93373ad9086b9" Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.325238 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkchh"] Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.331453 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-xkchh"] Oct 06 12:01:38 crc kubenswrapper[4958]: I1006 12:01:38.924672 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" path="/var/lib/kubelet/pods/590ccad0-d358-4f6a-9dcd-dfd539830f4e/volumes" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.648721 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-whwkw"] Oct 06 12:01:51 crc kubenswrapper[4958]: E1006 12:01:51.649653 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="extract-content" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649669 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="extract-content" Oct 06 12:01:51 crc kubenswrapper[4958]: E1006 12:01:51.649697 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="registry-server" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649705 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="registry-server" Oct 06 12:01:51 crc kubenswrapper[4958]: E1006 12:01:51.649722 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="registry-server" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649731 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="registry-server" Oct 06 12:01:51 crc kubenswrapper[4958]: E1006 12:01:51.649744 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="extract-utilities" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649751 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="extract-utilities" Oct 06 12:01:51 crc kubenswrapper[4958]: E1006 12:01:51.649770 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="extract-utilities" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649777 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="extract-utilities" Oct 06 12:01:51 crc kubenswrapper[4958]: E1006 12:01:51.649793 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="extract-content" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649800 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="extract-content" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649954 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7018d40-65db-421c-941b-92c80dfb9166" containerName="registry-server" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.649977 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="590ccad0-d358-4f6a-9dcd-dfd539830f4e" containerName="registry-server" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.650861 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.652774 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.654473 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.654641 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-v68rh" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.654799 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.665914 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-whwkw"] Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.713107 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-55j7h"] Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.714513 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.716657 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.721398 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-55j7h"] Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.727274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-config\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.727329 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zw6n\" (UniqueName: \"kubernetes.io/projected/5bb0d843-3402-42d3-8fb8-c81ab8befa24-kube-api-access-2zw6n\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.727386 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a905a9-fae0-4067-bde3-8aacc42ef198-config\") pod \"dnsmasq-dns-675f4bcbfc-whwkw\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.727456 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9v4\" (UniqueName: \"kubernetes.io/projected/d0a905a9-fae0-4067-bde3-8aacc42ef198-kube-api-access-4x9v4\") pod \"dnsmasq-dns-675f4bcbfc-whwkw\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.727514 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.833749 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.834324 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-config\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.834430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zw6n\" (UniqueName: \"kubernetes.io/projected/5bb0d843-3402-42d3-8fb8-c81ab8befa24-kube-api-access-2zw6n\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.834533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a905a9-fae0-4067-bde3-8aacc42ef198-config\") pod \"dnsmasq-dns-675f4bcbfc-whwkw\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: 
I1006 12:01:51.834633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x9v4\" (UniqueName: \"kubernetes.io/projected/d0a905a9-fae0-4067-bde3-8aacc42ef198-kube-api-access-4x9v4\") pod \"dnsmasq-dns-675f4bcbfc-whwkw\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.835052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.835869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a905a9-fae0-4067-bde3-8aacc42ef198-config\") pod \"dnsmasq-dns-675f4bcbfc-whwkw\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.835993 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-config\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.854108 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zw6n\" (UniqueName: \"kubernetes.io/projected/5bb0d843-3402-42d3-8fb8-c81ab8befa24-kube-api-access-2zw6n\") pod \"dnsmasq-dns-78dd6ddcc-55j7h\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.854872 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4x9v4\" (UniqueName: \"kubernetes.io/projected/d0a905a9-fae0-4067-bde3-8aacc42ef198-kube-api-access-4x9v4\") pod \"dnsmasq-dns-675f4bcbfc-whwkw\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:51 crc kubenswrapper[4958]: I1006 12:01:51.971521 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" Oct 06 12:01:52 crc kubenswrapper[4958]: I1006 12:01:52.039800 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" Oct 06 12:01:52 crc kubenswrapper[4958]: I1006 12:01:52.493705 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-whwkw"] Oct 06 12:01:52 crc kubenswrapper[4958]: W1006 12:01:52.495026 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0a905a9_fae0_4067_bde3_8aacc42ef198.slice/crio-1c2ffc7231d30a5b00dfa6b3185a61286ba83c01c4df57316fc0267bdafa4100 WatchSource:0}: Error finding container 1c2ffc7231d30a5b00dfa6b3185a61286ba83c01c4df57316fc0267bdafa4100: Status 404 returned error can't find the container with id 1c2ffc7231d30a5b00dfa6b3185a61286ba83c01c4df57316fc0267bdafa4100 Oct 06 12:01:52 crc kubenswrapper[4958]: I1006 12:01:52.497595 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:01:52 crc kubenswrapper[4958]: W1006 12:01:52.512423 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bb0d843_3402_42d3_8fb8_c81ab8befa24.slice/crio-6b64a4d584980c45fda1617e56968ff41deafd94ea0989e036e6abee66d03417 WatchSource:0}: Error finding container 6b64a4d584980c45fda1617e56968ff41deafd94ea0989e036e6abee66d03417: Status 404 returned error can't find the container with id 
6b64a4d584980c45fda1617e56968ff41deafd94ea0989e036e6abee66d03417 Oct 06 12:01:52 crc kubenswrapper[4958]: I1006 12:01:52.512956 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-55j7h"] Oct 06 12:01:53 crc kubenswrapper[4958]: I1006 12:01:53.104692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" event={"ID":"5bb0d843-3402-42d3-8fb8-c81ab8befa24","Type":"ContainerStarted","Data":"6b64a4d584980c45fda1617e56968ff41deafd94ea0989e036e6abee66d03417"} Oct 06 12:01:53 crc kubenswrapper[4958]: I1006 12:01:53.106213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" event={"ID":"d0a905a9-fae0-4067-bde3-8aacc42ef198","Type":"ContainerStarted","Data":"1c2ffc7231d30a5b00dfa6b3185a61286ba83c01c4df57316fc0267bdafa4100"} Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.774423 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-whwkw"] Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.797648 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fb7np"] Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.798748 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.810905 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fb7np"] Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.876313 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.876384 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzlx\" (UniqueName: \"kubernetes.io/projected/9f28c916-3fd1-4c4d-8791-7019551b1900-kube-api-access-9lzlx\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.876427 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-config\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.976923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lzlx\" (UniqueName: \"kubernetes.io/projected/9f28c916-3fd1-4c4d-8791-7019551b1900-kube-api-access-9lzlx\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.976976 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-config\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.977104 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.978045 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.978273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-config\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:54 crc kubenswrapper[4958]: I1006 12:01:54.999582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lzlx\" (UniqueName: \"kubernetes.io/projected/9f28c916-3fd1-4c4d-8791-7019551b1900-kube-api-access-9lzlx\") pod \"dnsmasq-dns-666b6646f7-fb7np\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.041843 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-55j7h"] Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.060700 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-9q2cz"] Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.061785 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.075889 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9q2cz"] Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.078790 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njbt\" (UniqueName: \"kubernetes.io/projected/682c8402-9403-4a24-a1ae-bf6025255f2b-kube-api-access-8njbt\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.078843 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.078890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-config\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.127444 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.179974 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.180056 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-config\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.180107 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8njbt\" (UniqueName: \"kubernetes.io/projected/682c8402-9403-4a24-a1ae-bf6025255f2b-kube-api-access-8njbt\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.181338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.181868 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-config\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 
12:01:55.196439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8njbt\" (UniqueName: \"kubernetes.io/projected/682c8402-9403-4a24-a1ae-bf6025255f2b-kube-api-access-8njbt\") pod \"dnsmasq-dns-57d769cc4f-9q2cz\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.376430 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.936741 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.949852 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.955174 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.955484 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.957692 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.958194 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.960197 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.960655 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.960729 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-server-dockercfg-jkfbz" Oct 06 12:01:55 crc kubenswrapper[4958]: I1006 12:01:55.971547 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.094231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2542eba4-43d2-4108-a6f0-8eb4a1714f77-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.094475 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-config-data\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.094583 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.094678 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.094811 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghxb\" (UniqueName: 
\"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-kube-api-access-2ghxb\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.094920 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.095005 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.095129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2542eba4-43d2-4108-a6f0-8eb4a1714f77-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.095251 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.095372 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.095488 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.196952 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2542eba4-43d2-4108-a6f0-8eb4a1714f77-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.196996 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-config-data\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197021 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197036 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" 
Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197068 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghxb\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-kube-api-access-2ghxb\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197124 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2542eba4-43d2-4108-a6f0-8eb4a1714f77-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197137 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197179 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.197205 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.198125 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.198823 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.199351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-config-data\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.199591 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") device mount path 
\"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.204957 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.207195 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.207256 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2542eba4-43d2-4108-a6f0-8eb4a1714f77-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.210674 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.214884 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.219632 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2542eba4-43d2-4108-a6f0-8eb4a1714f77-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.221885 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.221932 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.222104 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.222314 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.222437 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.222603 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-82c5t" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.223187 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.225399 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 
12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.227928 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.228814 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.235677 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghxb\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-kube-api-access-2ghxb\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.237985 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.284135 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.400645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.400729 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.400912 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401101 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401269 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401329 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401350 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c931ada-9afe-4ec4-9f75-42db89dc36e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401430 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401511 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c931ada-9afe-4ec4-9f75-42db89dc36e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401550 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.401616 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6dt\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-kube-api-access-pf6dt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503289 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503424 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503473 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: 
I1006 12:01:56.503521 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503740 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503784 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c931ada-9afe-4ec4-9f75-42db89dc36e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503821 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503900 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c931ada-9afe-4ec4-9f75-42db89dc36e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503935 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503985 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6dt\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-kube-api-access-pf6dt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.503995 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.504600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.506422 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.507579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.508013 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.508174 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.508203 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c931ada-9afe-4ec4-9f75-42db89dc36e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.508436 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.509158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.520013 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c931ada-9afe-4ec4-9f75-42db89dc36e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.535279 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.538355 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6dt\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-kube-api-access-pf6dt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:56 crc kubenswrapper[4958]: I1006 12:01:56.607941 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.266232 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.270498 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.280759 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.281069 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.281306 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.281434 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.281555 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tznql" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.287450 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.294265 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-config-data-default\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0" Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431679 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6w6\" (UniqueName: \"kubernetes.io/projected/acaf745d-7462-44e9-be0b-28424e3c2f31-kube-api-access-gn6w6\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " 
pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431806 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-secrets\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acaf745d-7462-44e9-be0b-28424e3c2f31-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431920 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-kolla-config\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.431953 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.432043 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.432099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-config-data-default\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539759 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6w6\" (UniqueName: \"kubernetes.io/projected/acaf745d-7462-44e9-be0b-28424e3c2f31-kube-api-access-gn6w6\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539832 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-secrets\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539856 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acaf745d-7462-44e9-be0b-28424e3c2f31-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539896 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-kolla-config\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539913 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539953 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.539986 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.540356 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.541281 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-kolla-config\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.541480 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acaf745d-7462-44e9-be0b-28424e3c2f31-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.542568 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-config-data-default\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.545935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-secrets\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.546800 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.552908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acaf745d-7462-44e9-be0b-28424e3c2f31-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.554558 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acaf745d-7462-44e9-be0b-28424e3c2f31-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.557994 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6w6\" (UniqueName: \"kubernetes.io/projected/acaf745d-7462-44e9-be0b-28424e3c2f31-kube-api-access-gn6w6\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.577799 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"acaf745d-7462-44e9-be0b-28424e3c2f31\") " pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.590557 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.924567 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.928414 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.935786 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.935993 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.936607 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-clc9k"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.936660 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 06 12:01:58 crc kubenswrapper[4958]: I1006 12:01:58.967711 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053650 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhj4\" (UniqueName: \"kubernetes.io/projected/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-kube-api-access-rkhj4\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053931 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053948 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053978 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.053993 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.157917 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158211 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158237 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158308 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhj4\" (UniqueName: \"kubernetes.io/projected/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-kube-api-access-rkhj4\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158455 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.158708 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.159949 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.160313 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.160726 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.161410 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.165257 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.166011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.178579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.178887 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhj4\" (UniqueName: \"kubernetes.io/projected/16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51-kube-api-access-rkhj4\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.218703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51\") " pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.272468 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.275636 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.276716 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.281212 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-866l7"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.281458 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.288212 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.288498 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.462909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-kolla-config\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.462951 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.462971 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-config-data\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.463011 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rznj9\" (UniqueName: \"kubernetes.io/projected/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-kube-api-access-rznj9\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.463213 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.564976 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-kolla-config\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.565018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.565036 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-config-data\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.565078 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rznj9\" (UniqueName: \"kubernetes.io/projected/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-kube-api-access-rznj9\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.565132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.566286 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-config-data\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.566313 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-kolla-config\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.569035 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.572493 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.589645 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rznj9\" (UniqueName: \"kubernetes.io/projected/3a4b3c4e-da8b-4eb1-a159-6376181dcbb8-kube-api-access-rznj9\") pod \"memcached-0\" (UID: \"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8\") " pod="openstack/memcached-0"
Oct 06 12:01:59 crc kubenswrapper[4958]: I1006 12:01:59.608709 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.347230 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.348531 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.351397 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xs5wb"
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.361434 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.412521 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5lx\" (UniqueName: \"kubernetes.io/projected/ed469bc1-6294-481d-aed5-136cc0585e1c-kube-api-access-rh5lx\") pod \"kube-state-metrics-0\" (UID: \"ed469bc1-6294-481d-aed5-136cc0585e1c\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.514024 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5lx\" (UniqueName: \"kubernetes.io/projected/ed469bc1-6294-481d-aed5-136cc0585e1c-kube-api-access-rh5lx\") pod \"kube-state-metrics-0\" (UID: \"ed469bc1-6294-481d-aed5-136cc0585e1c\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.539233 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5lx\" (UniqueName: \"kubernetes.io/projected/ed469bc1-6294-481d-aed5-136cc0585e1c-kube-api-access-rh5lx\") pod \"kube-state-metrics-0\" (UID: \"ed469bc1-6294-481d-aed5-136cc0585e1c\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:02:01 crc kubenswrapper[4958]: I1006 12:02:01.666622 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.653754 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5x288"]
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.654875 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.656301 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.658013 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-64427"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.658466 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-ovn-controller-tls-certs\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663400 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-log-ovn\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663421 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-scripts\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663470 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s2h\" (UniqueName: \"kubernetes.io/projected/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-kube-api-access-z6s2h\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-run\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663576 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-combined-ca-bundle\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663585 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5x288"]
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.663600 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-run-ovn\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.717318 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-k7nm2"]
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.719462 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.734561 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k7nm2"]
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764774 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-log-ovn\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-scripts\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s2h\" (UniqueName: \"kubernetes.io/projected/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-kube-api-access-z6s2h\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-etc-ovs\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764936 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzmj\" (UniqueName: \"kubernetes.io/projected/f8475601-8235-4d69-958e-53f8e6a2f71b-kube-api-access-flzmj\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764964 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8475601-8235-4d69-958e-53f8e6a2f71b-scripts\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.764987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-run\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765002 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-log\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-combined-ca-bundle\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-lib\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765061 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-run-ovn\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-ovn-controller-tls-certs\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765119 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-run\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765593 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-log-ovn\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288"
Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.765610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-run-ovn\") pod \"ovn-controller-5x288\" (UID: 
\"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.766002 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-var-run\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.767658 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-scripts\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.771306 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-combined-ca-bundle\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.773350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-ovn-controller-tls-certs\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.779441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s2h\" (UniqueName: \"kubernetes.io/projected/ca1fdee5-1c5e-4740-b69a-d2111ba255ee-kube-api-access-z6s2h\") pod \"ovn-controller-5x288\" (UID: \"ca1fdee5-1c5e-4740-b69a-d2111ba255ee\") " pod="openstack/ovn-controller-5x288" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.866916 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-run\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867330 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-etc-ovs\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flzmj\" (UniqueName: \"kubernetes.io/projected/f8475601-8235-4d69-958e-53f8e6a2f71b-kube-api-access-flzmj\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867044 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-run\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867399 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8475601-8235-4d69-958e-53f8e6a2f71b-scripts\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867442 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-log\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-lib\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867649 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-log\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867695 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-var-lib\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.867743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f8475601-8235-4d69-958e-53f8e6a2f71b-etc-ovs\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.870451 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8475601-8235-4d69-958e-53f8e6a2f71b-scripts\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " 
pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.883171 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flzmj\" (UniqueName: \"kubernetes.io/projected/f8475601-8235-4d69-958e-53f8e6a2f71b-kube-api-access-flzmj\") pod \"ovn-controller-ovs-k7nm2\" (UID: \"f8475601-8235-4d69-958e-53f8e6a2f71b\") " pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:04 crc kubenswrapper[4958]: I1006 12:02:04.975200 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5x288" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.036760 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.102306 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.103884 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.108565 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.109723 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5shj8" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.109856 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.109959 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.116080 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.173313 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.275630 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd6hp\" (UniqueName: \"kubernetes.io/projected/d699699e-9c26-4129-9483-3ac7d597f948-kube-api-access-sd6hp\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.275812 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d699699e-9c26-4129-9483-3ac7d597f948-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.275872 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699699e-9c26-4129-9483-3ac7d597f948-config\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.275907 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d699699e-9c26-4129-9483-3ac7d597f948-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.275949 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.276018 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.276060 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.276097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377424 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377531 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd6hp\" (UniqueName: \"kubernetes.io/projected/d699699e-9c26-4129-9483-3ac7d597f948-kube-api-access-sd6hp\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377614 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d699699e-9c26-4129-9483-3ac7d597f948-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699699e-9c26-4129-9483-3ac7d597f948-config\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc 
kubenswrapper[4958]: I1006 12:02:05.377677 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d699699e-9c26-4129-9483-3ac7d597f948-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377703 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.377898 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.379035 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d699699e-9c26-4129-9483-3ac7d597f948-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.379220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d699699e-9c26-4129-9483-3ac7d597f948-config\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.379894 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d699699e-9c26-4129-9483-3ac7d597f948-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.382667 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.382704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.386811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d699699e-9c26-4129-9483-3ac7d597f948-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.397677 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd6hp\" (UniqueName: \"kubernetes.io/projected/d699699e-9c26-4129-9483-3ac7d597f948-kube-api-access-sd6hp\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " 
pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.410921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d699699e-9c26-4129-9483-3ac7d597f948\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:05 crc kubenswrapper[4958]: I1006 12:02:05.487810 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:07 crc kubenswrapper[4958]: I1006 12:02:07.508243 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9q2cz"] Oct 06 12:02:08 crc kubenswrapper[4958]: E1006 12:02:08.232184 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 06 12:02:08 crc kubenswrapper[4958]: E1006 12:02:08.232339 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-55j7h_openstack(5bb0d843-3402-42d3-8fb8-c81ab8befa24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:02:08 crc kubenswrapper[4958]: E1006 12:02:08.233899 4958 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" podUID="5bb0d843-3402-42d3-8fb8-c81ab8befa24" Oct 06 12:02:08 crc kubenswrapper[4958]: E1006 12:02:08.259047 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 06 12:02:08 crc kubenswrapper[4958]: E1006 12:02:08.259285 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4x9v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-whwkw_openstack(d0a905a9-fae0-4067-bde3-8aacc42ef198): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:02:08 crc kubenswrapper[4958]: E1006 12:02:08.260527 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" podUID="d0a905a9-fae0-4067-bde3-8aacc42ef198" Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.845481 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.850783 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.853443 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.854239 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.854467 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.854662 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jdzzm" Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.864746 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 12:02:08 crc kubenswrapper[4958]: I1006 12:02:08.924280 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:02:08 crc kubenswrapper[4958]: W1006 12:02:08.940017 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2542eba4_43d2_4108_a6f0_8eb4a1714f77.slice/crio-3950b8bfde8a63186deae60f23ede73b06b58698e28846afd98c8d3d17e8edf2 WatchSource:0}: Error finding container 3950b8bfde8a63186deae60f23ede73b06b58698e28846afd98c8d3d17e8edf2: Status 404 returned error can't find the container with id 3950b8bfde8a63186deae60f23ede73b06b58698e28846afd98c8d3d17e8edf2 Oct 06 12:02:09 crc kubenswrapper[4958]: 
I1006 12:02:09.025062 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fb7np"]
Oct 06 12:02:09 crc kubenswrapper[4958]: W1006 12:02:09.032848 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f28c916_3fd1_4c4d_8791_7019551b1900.slice/crio-f2017aa759876523fb7b9c8137dc88c06cabbe78a97204d7500b9628b90eccf8 WatchSource:0}: Error finding container f2017aa759876523fb7b9c8137dc88c06cabbe78a97204d7500b9628b90eccf8: Status 404 returned error can't find the container with id f2017aa759876523fb7b9c8137dc88c06cabbe78a97204d7500b9628b90eccf8
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.039720 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.039780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.039884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.039915 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b1ca14-6697-42b0-8e63-fcec51e4599a-config\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.039935 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.040044 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxtx\" (UniqueName: \"kubernetes.io/projected/90b1ca14-6697-42b0-8e63-fcec51e4599a-kube-api-access-kbxtx\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.040063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90b1ca14-6697-42b0-8e63-fcec51e4599a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.040823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90b1ca14-6697-42b0-8e63-fcec51e4599a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.141973 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90b1ca14-6697-42b0-8e63-fcec51e4599a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142051 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b1ca14-6697-42b0-8e63-fcec51e4599a-config\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142164 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142215 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxtx\" (UniqueName: \"kubernetes.io/projected/90b1ca14-6697-42b0-8e63-fcec51e4599a-kube-api-access-kbxtx\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142230 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90b1ca14-6697-42b0-8e63-fcec51e4599a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.142890 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90b1ca14-6697-42b0-8e63-fcec51e4599a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.143115 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.143483 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90b1ca14-6697-42b0-8e63-fcec51e4599a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.143926 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b1ca14-6697-42b0-8e63-fcec51e4599a-config\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.146775 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.146990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.149746 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90b1ca14-6697-42b0-8e63-fcec51e4599a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.160959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxtx\" (UniqueName: \"kubernetes.io/projected/90b1ca14-6697-42b0-8e63-fcec51e4599a-kube-api-access-kbxtx\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.178591 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90b1ca14-6697-42b0-8e63-fcec51e4599a\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.246041 4958 generic.go:334] "Generic (PLEG): container finished" podID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerID="d795c137205981326b5f9f901d8e37fa3e4fc6c9362d298e6f299623c6fa9da3" exitCode=0
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.246499 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" event={"ID":"682c8402-9403-4a24-a1ae-bf6025255f2b","Type":"ContainerDied","Data":"d795c137205981326b5f9f901d8e37fa3e4fc6c9362d298e6f299623c6fa9da3"}
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.246532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" event={"ID":"682c8402-9403-4a24-a1ae-bf6025255f2b","Type":"ContainerStarted","Data":"e0d8e451d6787e9ac96273810bce9660096d71045c4920826433875fc641b625"}
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.255032 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2542eba4-43d2-4108-a6f0-8eb4a1714f77","Type":"ContainerStarted","Data":"3950b8bfde8a63186deae60f23ede73b06b58698e28846afd98c8d3d17e8edf2"}
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.259441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" event={"ID":"9f28c916-3fd1-4c4d-8791-7019551b1900","Type":"ContainerStarted","Data":"acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b"}
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.259474 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" event={"ID":"9f28c916-3fd1-4c4d-8791-7019551b1900","Type":"ContainerStarted","Data":"f2017aa759876523fb7b9c8137dc88c06cabbe78a97204d7500b9628b90eccf8"}
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.350094 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.355714 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5x288"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.367100 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.373857 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 06 12:02:09 crc kubenswrapper[4958]: W1006 12:02:09.399000 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1fdee5_1c5e_4740_b69a_d2111ba255ee.slice/crio-f404f65cbf490c43f5170f0504cd5abd3671b64356f37b6fdcffba996741c997 WatchSource:0}: Error finding container f404f65cbf490c43f5170f0504cd5abd3671b64356f37b6fdcffba996741c997: Status 404 returned error can't find the container with id f404f65cbf490c43f5170f0504cd5abd3671b64356f37b6fdcffba996741c997
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.400210 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.420339 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.478325 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.509548 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.599907 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k7nm2"]
Oct 06 12:02:09 crc kubenswrapper[4958]: W1006 12:02:09.603382 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8475601_8235_4d69_958e_53f8e6a2f71b.slice/crio-eda0a7755cdf8781f18cce10fb02434db6b84e5d9f749d56c79fabf9a8f52155 WatchSource:0}: Error finding container eda0a7755cdf8781f18cce10fb02434db6b84e5d9f749d56c79fabf9a8f52155: Status 404 returned error can't find the container with id eda0a7755cdf8781f18cce10fb02434db6b84e5d9f749d56c79fabf9a8f52155
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.757853 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.813209 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.862442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x9v4\" (UniqueName: \"kubernetes.io/projected/d0a905a9-fae0-4067-bde3-8aacc42ef198-kube-api-access-4x9v4\") pod \"d0a905a9-fae0-4067-bde3-8aacc42ef198\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") "
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.862629 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a905a9-fae0-4067-bde3-8aacc42ef198-config\") pod \"d0a905a9-fae0-4067-bde3-8aacc42ef198\" (UID: \"d0a905a9-fae0-4067-bde3-8aacc42ef198\") "
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.863124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a905a9-fae0-4067-bde3-8aacc42ef198-config" (OuterVolumeSpecName: "config") pod "d0a905a9-fae0-4067-bde3-8aacc42ef198" (UID: "d0a905a9-fae0-4067-bde3-8aacc42ef198"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.868510 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a905a9-fae0-4067-bde3-8aacc42ef198-kube-api-access-4x9v4" (OuterVolumeSpecName: "kube-api-access-4x9v4") pod "d0a905a9-fae0-4067-bde3-8aacc42ef198" (UID: "d0a905a9-fae0-4067-bde3-8aacc42ef198"). InnerVolumeSpecName "kube-api-access-4x9v4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.913776 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dk97g"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.914933 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.923253 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.952806 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dk97g"]
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.964298 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-config\") pod \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") "
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.964420 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zw6n\" (UniqueName: \"kubernetes.io/projected/5bb0d843-3402-42d3-8fb8-c81ab8befa24-kube-api-access-2zw6n\") pod \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") "
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.964443 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-dns-svc\") pod \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\" (UID: \"5bb0d843-3402-42d3-8fb8-c81ab8befa24\") "
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.966888 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-config" (OuterVolumeSpecName: "config") pod "5bb0d843-3402-42d3-8fb8-c81ab8befa24" (UID: "5bb0d843-3402-42d3-8fb8-c81ab8befa24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.967650 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.967674 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a905a9-fae0-4067-bde3-8aacc42ef198-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.967686 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x9v4\" (UniqueName: \"kubernetes.io/projected/d0a905a9-fae0-4067-bde3-8aacc42ef198-kube-api-access-4x9v4\") on node \"crc\" DevicePath \"\""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.967788 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bb0d843-3402-42d3-8fb8-c81ab8befa24" (UID: "5bb0d843-3402-42d3-8fb8-c81ab8befa24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:02:09 crc kubenswrapper[4958]: I1006 12:02:09.973498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb0d843-3402-42d3-8fb8-c81ab8befa24-kube-api-access-2zw6n" (OuterVolumeSpecName: "kube-api-access-2zw6n") pod "5bb0d843-3402-42d3-8fb8-c81ab8befa24" (UID: "5bb0d843-3402-42d3-8fb8-c81ab8befa24"). InnerVolumeSpecName "kube-api-access-2zw6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.060139 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-config\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069346 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-combined-ca-bundle\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069422 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069475 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckw9h\" (UniqueName: \"kubernetes.io/projected/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-kube-api-access-ckw9h\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069543 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-ovn-rundir\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069579 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-ovs-rundir\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069689 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zw6n\" (UniqueName: \"kubernetes.io/projected/5bb0d843-3402-42d3-8fb8-c81ab8befa24-kube-api-access-2zw6n\") on node \"crc\" DevicePath \"\""
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.069703 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb0d843-3402-42d3-8fb8-c81ab8befa24-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.158374 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9q2cz"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.173394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckw9h\" (UniqueName: \"kubernetes.io/projected/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-kube-api-access-ckw9h\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.173471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-ovn-rundir\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.173494 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-ovs-rundir\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.173543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-config\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.173564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-combined-ca-bundle\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.173586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.175799 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-ovs-rundir\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.176280 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-ovn-rundir\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.176482 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-config\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.179552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-combined-ca-bundle\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.202017 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.206620 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckw9h\" (UniqueName: \"kubernetes.io/projected/0001ceb8-4afd-4d37-acfe-8ed9c976b6d9-kube-api-access-ckw9h\") pod \"ovn-controller-metrics-dk97g\" (UID: \"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9\") " pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.206709 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h8bp2"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.208261 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.212888 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.215913 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h8bp2"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.257056 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dk97g"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.310601 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed469bc1-6294-481d-aed5-136cc0585e1c","Type":"ContainerStarted","Data":"4177041dad3df0d002a27dd05416f6b032363b7c4c792f80308c6d25f5172292"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.316673 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90b1ca14-6697-42b0-8e63-fcec51e4599a","Type":"ContainerStarted","Data":"58f6417b2de87e30ec4a29150799a83aa8178f77aa4275813b848c8349251f1e"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.318063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c931ada-9afe-4ec4-9f75-42db89dc36e8","Type":"ContainerStarted","Data":"76747f183e6353869963159a41c388cd7ddec98d4e1de9ee4df6d13537377d5c"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.318964 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h" event={"ID":"5bb0d843-3402-42d3-8fb8-c81ab8befa24","Type":"ContainerDied","Data":"6b64a4d584980c45fda1617e56968ff41deafd94ea0989e036e6abee66d03417"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.319023 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-55j7h"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.341612 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fb7np"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.346559 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw" event={"ID":"d0a905a9-fae0-4067-bde3-8aacc42ef198","Type":"ContainerDied","Data":"1c2ffc7231d30a5b00dfa6b3185a61286ba83c01c4df57316fc0267bdafa4100"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.346653 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-whwkw"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.349085 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8","Type":"ContainerStarted","Data":"7464358177b81baa8faf7371e6696f13dddaf4657dd6ab9975383f6b70edbf00"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.352553 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acaf745d-7462-44e9-be0b-28424e3c2f31","Type":"ContainerStarted","Data":"a4ce18396402b2cc9e5289b7d9bd3f7f168a434e2f440f7c0ebe91d1853d8fbd"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.374513 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" event={"ID":"682c8402-9403-4a24-a1ae-bf6025255f2b","Type":"ContainerStarted","Data":"3ccdcb02ac6526b136afd5bb147bbfd12492c7a146904baea9433d974f80f0d1"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.375124 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.376737 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm2fm\" (UniqueName: \"kubernetes.io/projected/fdf0c15c-42d1-456a-ab96-8b11cff221ca-kube-api-access-jm2fm\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.376775 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.376852 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.376881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-config\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.388401 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmmpn"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.389706 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.396652 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.403052 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d699699e-9c26-4129-9483-3ac7d597f948","Type":"ContainerStarted","Data":"cd8f2188e9ad12dec3306b16e6474d62020cb786f62378ad3c125f043baf534e"}
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.405188 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmmpn"]
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.410577 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" podStartSLOduration=14.716703689 podStartE2EDuration="15.410556272s" podCreationTimestamp="2025-10-06 12:01:55 +0000 UTC" firstStartedPulling="2025-10-06 12:02:08.248982672 +0000 UTC m=+882.135007980" lastFinishedPulling="2025-10-06 12:02:08.942835255 +0000 UTC m=+882.828860563" observedRunningTime="2025-10-06 12:02:10.397489462 +0000 UTC m=+884.283514760" watchObservedRunningTime="2025-10-06 12:02:10.410556272 +0000 UTC m=+884.296581580"
Oct 06 12:02:10 crc kubenswrapper[4958]: I1006 12:02:10.418375 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5x288" event={"ID":"ca1fdee5-1c5e-4740-b69a-d2111ba255ee","Type":"ContainerStarted","Data":"f404f65cbf490c43f5170f0504cd5abd3671b64356f37b6fdcffba996741c997"}
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.432575 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51","Type":"ContainerStarted","Data":"adb7fe76fc60ed80c7b402874011ad46b84b8def707b09910cee8156da10f7a9"}
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.463920 4958 generic.go:334] "Generic (PLEG): container finished" podID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerID="acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b" exitCode=0
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.464477 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" event={"ID":"9f28c916-3fd1-4c4d-8791-7019551b1900","Type":"ContainerDied","Data":"acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b"}
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.470775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k7nm2" event={"ID":"f8475601-8235-4d69-958e-53f8e6a2f71b","Type":"ContainerStarted","Data":"eda0a7755cdf8781f18cce10fb02434db6b84e5d9f749d56c79fabf9a8f52155"}
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.479766 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-config\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.479816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.479863 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm2fm\" (UniqueName: \"kubernetes.io/projected/fdf0c15c-42d1-456a-ab96-8b11cff221ca-kube-api-access-jm2fm\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.479907 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.479931 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djjp\" (UniqueName: \"kubernetes.io/projected/2084f4e2-c329-4c6b-98f3-9814b782b587-kube-api-access-5djjp\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.479960 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.480042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.480069 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-config\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.480092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.482140 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.482789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-config\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.483139 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2"
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.491104 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-55j7h"]
Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.496589 4958 kubelet.go:2431] "SyncLoop
REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-55j7h"] Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.526782 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm2fm\" (UniqueName: \"kubernetes.io/projected/fdf0c15c-42d1-456a-ab96-8b11cff221ca-kube-api-access-jm2fm\") pod \"dnsmasq-dns-7fd796d7df-h8bp2\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.559891 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-whwkw"] Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.572326 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-whwkw"] Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.581977 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.582102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-config\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.582138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 
12:02:10.582226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djjp\" (UniqueName: \"kubernetes.io/projected/2084f4e2-c329-4c6b-98f3-9814b782b587-kube-api-access-5djjp\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.582255 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.583270 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.583365 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.584409 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-config\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.585259 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.603697 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.605887 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djjp\" (UniqueName: \"kubernetes.io/projected/2084f4e2-c329-4c6b-98f3-9814b782b587-kube-api-access-5djjp\") pod \"dnsmasq-dns-86db49b7ff-hmmpn\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.763278 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.924110 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb0d843-3402-42d3-8fb8-c81ab8befa24" path="/var/lib/kubelet/pods/5bb0d843-3402-42d3-8fb8-c81ab8befa24/volumes" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:10.924540 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a905a9-fae0-4067-bde3-8aacc42ef198" path="/var/lib/kubelet/pods/d0a905a9-fae0-4067-bde3-8aacc42ef198/volumes" Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:11.480644 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="dnsmasq-dns" containerID="cri-o://3ccdcb02ac6526b136afd5bb147bbfd12492c7a146904baea9433d974f80f0d1" gracePeriod=10 Oct 06 12:02:11 crc kubenswrapper[4958]: I1006 12:02:11.831331 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-metrics-dk97g"] Oct 06 12:02:12 crc kubenswrapper[4958]: I1006 12:02:12.495250 4958 generic.go:334] "Generic (PLEG): container finished" podID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerID="3ccdcb02ac6526b136afd5bb147bbfd12492c7a146904baea9433d974f80f0d1" exitCode=0 Oct 06 12:02:12 crc kubenswrapper[4958]: I1006 12:02:12.495322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" event={"ID":"682c8402-9403-4a24-a1ae-bf6025255f2b","Type":"ContainerDied","Data":"3ccdcb02ac6526b136afd5bb147bbfd12492c7a146904baea9433d974f80f0d1"} Oct 06 12:02:15 crc kubenswrapper[4958]: I1006 12:02:15.518902 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dk97g" event={"ID":"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9","Type":"ContainerStarted","Data":"5389f3892854089f9e01fbe11b4c061fac10d6ff8de34868227178adb0b4aabe"} Oct 06 12:02:18 crc kubenswrapper[4958]: I1006 12:02:18.891628 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:02:18 crc kubenswrapper[4958]: I1006 12:02:18.964074 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8njbt\" (UniqueName: \"kubernetes.io/projected/682c8402-9403-4a24-a1ae-bf6025255f2b-kube-api-access-8njbt\") pod \"682c8402-9403-4a24-a1ae-bf6025255f2b\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " Oct 06 12:02:18 crc kubenswrapper[4958]: I1006 12:02:18.964245 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-dns-svc\") pod \"682c8402-9403-4a24-a1ae-bf6025255f2b\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " Oct 06 12:02:18 crc kubenswrapper[4958]: I1006 12:02:18.964293 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-config\") pod \"682c8402-9403-4a24-a1ae-bf6025255f2b\" (UID: \"682c8402-9403-4a24-a1ae-bf6025255f2b\") " Oct 06 12:02:18 crc kubenswrapper[4958]: I1006 12:02:18.972253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682c8402-9403-4a24-a1ae-bf6025255f2b-kube-api-access-8njbt" (OuterVolumeSpecName: "kube-api-access-8njbt") pod "682c8402-9403-4a24-a1ae-bf6025255f2b" (UID: "682c8402-9403-4a24-a1ae-bf6025255f2b"). InnerVolumeSpecName "kube-api-access-8njbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.011558 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "682c8402-9403-4a24-a1ae-bf6025255f2b" (UID: "682c8402-9403-4a24-a1ae-bf6025255f2b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.021411 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-config" (OuterVolumeSpecName: "config") pod "682c8402-9403-4a24-a1ae-bf6025255f2b" (UID: "682c8402-9403-4a24-a1ae-bf6025255f2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.065953 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.065986 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682c8402-9403-4a24-a1ae-bf6025255f2b-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.065995 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8njbt\" (UniqueName: \"kubernetes.io/projected/682c8402-9403-4a24-a1ae-bf6025255f2b-kube-api-access-8njbt\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.235312 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h8bp2"] Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.554617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" event={"ID":"682c8402-9403-4a24-a1ae-bf6025255f2b","Type":"ContainerDied","Data":"e0d8e451d6787e9ac96273810bce9660096d71045c4920826433875fc641b625"} Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.554670 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.554968 4958 scope.go:117] "RemoveContainer" containerID="3ccdcb02ac6526b136afd5bb147bbfd12492c7a146904baea9433d974f80f0d1" Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.583590 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9q2cz"] Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.588913 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9q2cz"] Oct 06 12:02:19 crc kubenswrapper[4958]: W1006 12:02:19.817729 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdf0c15c_42d1_456a_ab96_8b11cff221ca.slice/crio-a78aac2a9318aebf83ea63e8c12ef0dd9146405bf330f9dc3bb0e99c9530e2f0 WatchSource:0}: Error finding container a78aac2a9318aebf83ea63e8c12ef0dd9146405bf330f9dc3bb0e99c9530e2f0: Status 404 returned error can't find the container with id a78aac2a9318aebf83ea63e8c12ef0dd9146405bf330f9dc3bb0e99c9530e2f0 Oct 06 12:02:19 crc kubenswrapper[4958]: I1006 12:02:19.987828 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmmpn"] Oct 06 12:02:20 crc kubenswrapper[4958]: I1006 12:02:20.377737 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-9q2cz" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.100:5353: i/o timeout" Oct 06 12:02:20 crc kubenswrapper[4958]: I1006 12:02:20.562637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" event={"ID":"fdf0c15c-42d1-456a-ab96-8b11cff221ca","Type":"ContainerStarted","Data":"a78aac2a9318aebf83ea63e8c12ef0dd9146405bf330f9dc3bb0e99c9530e2f0"} Oct 06 12:02:20 crc kubenswrapper[4958]: I1006 12:02:20.925019 4958 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" path="/var/lib/kubelet/pods/682c8402-9403-4a24-a1ae-bf6025255f2b/volumes" Oct 06 12:02:21 crc kubenswrapper[4958]: I1006 12:02:21.228964 4958 scope.go:117] "RemoveContainer" containerID="d795c137205981326b5f9f901d8e37fa3e4fc6c9362d298e6f299623c6fa9da3" Oct 06 12:02:21 crc kubenswrapper[4958]: I1006 12:02:21.574434 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" event={"ID":"2084f4e2-c329-4c6b-98f3-9814b782b587","Type":"ContainerStarted","Data":"78a0043918bbb3f8bc2b40ad0fd59a371cb419e3541524ed18a45b7c4099a0e6"} Oct 06 12:02:22 crc kubenswrapper[4958]: I1006 12:02:22.584354 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" event={"ID":"9f28c916-3fd1-4c4d-8791-7019551b1900","Type":"ContainerStarted","Data":"83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40"} Oct 06 12:02:22 crc kubenswrapper[4958]: I1006 12:02:22.584880 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:02:22 crc kubenswrapper[4958]: I1006 12:02:22.584486 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerName="dnsmasq-dns" containerID="cri-o://83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40" gracePeriod=10 Oct 06 12:02:22 crc kubenswrapper[4958]: I1006 12:02:22.607659 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" podStartSLOduration=28.607638421 podStartE2EDuration="28.607638421s" podCreationTimestamp="2025-10-06 12:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:22.60227182 +0000 UTC m=+896.488297138" 
watchObservedRunningTime="2025-10-06 12:02:22.607638421 +0000 UTC m=+896.493663749" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.351398 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.446954 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-dns-svc\") pod \"9f28c916-3fd1-4c4d-8791-7019551b1900\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.447135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lzlx\" (UniqueName: \"kubernetes.io/projected/9f28c916-3fd1-4c4d-8791-7019551b1900-kube-api-access-9lzlx\") pod \"9f28c916-3fd1-4c4d-8791-7019551b1900\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.447294 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-config\") pod \"9f28c916-3fd1-4c4d-8791-7019551b1900\" (UID: \"9f28c916-3fd1-4c4d-8791-7019551b1900\") " Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.531801 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f28c916-3fd1-4c4d-8791-7019551b1900-kube-api-access-9lzlx" (OuterVolumeSpecName: "kube-api-access-9lzlx") pod "9f28c916-3fd1-4c4d-8791-7019551b1900" (UID: "9f28c916-3fd1-4c4d-8791-7019551b1900"). InnerVolumeSpecName "kube-api-access-9lzlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.549620 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lzlx\" (UniqueName: \"kubernetes.io/projected/9f28c916-3fd1-4c4d-8791-7019551b1900-kube-api-access-9lzlx\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.594374 4958 generic.go:334] "Generic (PLEG): container finished" podID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerID="83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40" exitCode=0 Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.594424 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.594477 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" event={"ID":"9f28c916-3fd1-4c4d-8791-7019551b1900","Type":"ContainerDied","Data":"83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.594555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fb7np" event={"ID":"9f28c916-3fd1-4c4d-8791-7019551b1900","Type":"ContainerDied","Data":"f2017aa759876523fb7b9c8137dc88c06cabbe78a97204d7500b9628b90eccf8"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.594590 4958 scope.go:117] "RemoveContainer" containerID="83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.597771 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3a4b3c4e-da8b-4eb1-a159-6376181dcbb8","Type":"ContainerStarted","Data":"35f634e1e99fc19405626ccb217ee04e378c945bf5c23f25be363431d107fae0"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.597862 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/memcached-0" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.600596 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acaf745d-7462-44e9-be0b-28424e3c2f31","Type":"ContainerStarted","Data":"9152c6dddc449babe5504465f88e6939b74b7aea3126a790b2d9ef81442372c6"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.602539 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d699699e-9c26-4129-9483-3ac7d597f948","Type":"ContainerStarted","Data":"dae146899db7cfa264aac4bc2b3bf7cac22169fc2a36e8e5e650acf3ef7714d3"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.604292 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed469bc1-6294-481d-aed5-136cc0585e1c","Type":"ContainerStarted","Data":"9d08a890706eff159de753dbced2e509a813db0e921778a980d06d2cea82d28e"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.604728 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.605896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90b1ca14-6697-42b0-8e63-fcec51e4599a","Type":"ContainerStarted","Data":"5dcf21aeedb92c9fa953ec0e8f0b5d7cb3fdb00819685348404430f270e6143b"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.606898 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51","Type":"ContainerStarted","Data":"202c92107de849f2d757145069517fb92dd687c80f4a98d95e950402d04c7a0a"} Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.621331 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.983888914 podStartE2EDuration="24.621308697s" 
podCreationTimestamp="2025-10-06 12:01:59 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.405263591 +0000 UTC m=+883.291288899" lastFinishedPulling="2025-10-06 12:02:21.042683374 +0000 UTC m=+894.928708682" observedRunningTime="2025-10-06 12:02:23.620096948 +0000 UTC m=+897.506122276" watchObservedRunningTime="2025-10-06 12:02:23.621308697 +0000 UTC m=+897.507334005" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.641239 4958 scope.go:117] "RemoveContainer" containerID="acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.685474 4958 scope.go:117] "RemoveContainer" containerID="83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40" Oct 06 12:02:23 crc kubenswrapper[4958]: E1006 12:02:23.686003 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40\": container with ID starting with 83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40 not found: ID does not exist" containerID="83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.686045 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40"} err="failed to get container status \"83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40\": rpc error: code = NotFound desc = could not find container \"83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40\": container with ID starting with 83a70133384a53ebfee294156dcda7c2f4997f1ccdbc77c95aa51b4ae4bd5f40 not found: ID does not exist" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.686074 4958 scope.go:117] "RemoveContainer" containerID="acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b" Oct 06 12:02:23 crc 
kubenswrapper[4958]: E1006 12:02:23.686604 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b\": container with ID starting with acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b not found: ID does not exist" containerID="acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.686626 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b"} err="failed to get container status \"acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b\": rpc error: code = NotFound desc = could not find container \"acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b\": container with ID starting with acb25d853bedc902c5a5e50333991194ecb6e09b85a606c7959b472dff78432b not found: ID does not exist" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.689258 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.263722445 podStartE2EDuration="22.689243711s" podCreationTimestamp="2025-10-06 12:02:01 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.405440955 +0000 UTC m=+883.291466263" lastFinishedPulling="2025-10-06 12:02:22.830962221 +0000 UTC m=+896.716987529" observedRunningTime="2025-10-06 12:02:23.686471483 +0000 UTC m=+897.572496791" watchObservedRunningTime="2025-10-06 12:02:23.689243711 +0000 UTC m=+897.575269029" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.943970 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f28c916-3fd1-4c4d-8791-7019551b1900" (UID: "9f28c916-3fd1-4c4d-8791-7019551b1900"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:23 crc kubenswrapper[4958]: I1006 12:02:23.991779 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.044693 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-config" (OuterVolumeSpecName: "config") pod "9f28c916-3fd1-4c4d-8791-7019551b1900" (UID: "9f28c916-3fd1-4c4d-8791-7019551b1900"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.093443 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f28c916-3fd1-4c4d-8791-7019551b1900-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.254975 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fb7np"] Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.261379 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fb7np"] Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.621003 4958 generic.go:334] "Generic (PLEG): container finished" podID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerID="4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c" exitCode=0 Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.621104 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" event={"ID":"fdf0c15c-42d1-456a-ab96-8b11cff221ca","Type":"ContainerDied","Data":"4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.627296 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d699699e-9c26-4129-9483-3ac7d597f948","Type":"ContainerStarted","Data":"1e1593355f72a76c43a9a7d2e6e1a751f37d8eddcf786f9a15b61432b2957214"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.631024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2542eba4-43d2-4108-a6f0-8eb4a1714f77","Type":"ContainerStarted","Data":"7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.634316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dk97g" event={"ID":"0001ceb8-4afd-4d37-acfe-8ed9c976b6d9","Type":"ContainerStarted","Data":"3f0c9ed69976553d742a6502c3854a57dda5359cc1c1b47b80228ea6719ab891"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.636732 4958 generic.go:334] "Generic (PLEG): container finished" podID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerID="d373f140ed0982edd619a65bad33aa8467267528d63e97bf9aee7d5cfde78b34" exitCode=0 Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.636992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" event={"ID":"2084f4e2-c329-4c6b-98f3-9814b782b587","Type":"ContainerDied","Data":"d373f140ed0982edd619a65bad33aa8467267528d63e97bf9aee7d5cfde78b34"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.640936 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5x288" event={"ID":"ca1fdee5-1c5e-4740-b69a-d2111ba255ee","Type":"ContainerStarted","Data":"182eb06c6ee474b8bc63e5450c7b9c56dc51afbf6e4bf911366388f8c764ba99"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.642209 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5x288" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.644205 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"90b1ca14-6697-42b0-8e63-fcec51e4599a","Type":"ContainerStarted","Data":"2ae9080abb3216b87917b23af0ced8bcaba6b2290e5fb20c2cc85a5b602f2dc6"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.646406 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c931ada-9afe-4ec4-9f75-42db89dc36e8","Type":"ContainerStarted","Data":"9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.651263 4958 generic.go:334] "Generic (PLEG): container finished" podID="f8475601-8235-4d69-958e-53f8e6a2f71b" containerID="1119ef82f62a648957332af173ab6b512c0ac387641a466453507c2a008f5ce8" exitCode=0 Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.651386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k7nm2" event={"ID":"f8475601-8235-4d69-958e-53f8e6a2f71b","Type":"ContainerDied","Data":"1119ef82f62a648957332af173ab6b512c0ac387641a466453507c2a008f5ce8"} Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.792496 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dk97g" podStartSLOduration=8.557111609 podStartE2EDuration="15.792476231s" podCreationTimestamp="2025-10-06 12:02:09 +0000 UTC" firstStartedPulling="2025-10-06 12:02:14.962705368 +0000 UTC m=+888.848730676" lastFinishedPulling="2025-10-06 12:02:22.19806999 +0000 UTC m=+896.084095298" observedRunningTime="2025-10-06 12:02:24.749871397 +0000 UTC m=+898.635896705" watchObservedRunningTime="2025-10-06 12:02:24.792476231 +0000 UTC m=+898.678501539" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.824861 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5x288" podStartSLOduration=8.321924023 podStartE2EDuration="20.824839103s" podCreationTimestamp="2025-10-06 12:02:04 +0000 UTC" 
firstStartedPulling="2025-10-06 12:02:09.404692597 +0000 UTC m=+883.290717905" lastFinishedPulling="2025-10-06 12:02:21.907607677 +0000 UTC m=+895.793632985" observedRunningTime="2025-10-06 12:02:24.820520017 +0000 UTC m=+898.706545325" watchObservedRunningTime="2025-10-06 12:02:24.824839103 +0000 UTC m=+898.710864411" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.867433 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.626841172 podStartE2EDuration="20.867409616s" podCreationTimestamp="2025-10-06 12:02:04 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.543153268 +0000 UTC m=+883.429178576" lastFinishedPulling="2025-10-06 12:02:21.783721712 +0000 UTC m=+895.669747020" observedRunningTime="2025-10-06 12:02:24.865732255 +0000 UTC m=+898.751757563" watchObservedRunningTime="2025-10-06 12:02:24.867409616 +0000 UTC m=+898.753434924" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.899914 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.066485329 podStartE2EDuration="17.899894361s" podCreationTimestamp="2025-10-06 12:02:07 +0000 UTC" firstStartedPulling="2025-10-06 12:02:10.076961202 +0000 UTC m=+883.962986510" lastFinishedPulling="2025-10-06 12:02:21.910370234 +0000 UTC m=+895.796395542" observedRunningTime="2025-10-06 12:02:24.892858269 +0000 UTC m=+898.778883597" watchObservedRunningTime="2025-10-06 12:02:24.899894361 +0000 UTC m=+898.785919659" Oct 06 12:02:24 crc kubenswrapper[4958]: I1006 12:02:24.927742 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" path="/var/lib/kubelet/pods/9f28c916-3fd1-4c4d-8791-7019551b1900/volumes" Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.488867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:25 crc kubenswrapper[4958]: 
I1006 12:02:25.664132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k7nm2" event={"ID":"f8475601-8235-4d69-958e-53f8e6a2f71b","Type":"ContainerStarted","Data":"8856518b4bed1da94d6d460e46623c5c44a9b67a0d92cdc2752020162357bbb1"} Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.664205 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k7nm2" event={"ID":"f8475601-8235-4d69-958e-53f8e6a2f71b","Type":"ContainerStarted","Data":"0ef01aaec127f2c41c0e14ba1c027a91dc398b9a0565a8a2d98a97560d5548d8"} Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.665386 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.668318 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" event={"ID":"fdf0c15c-42d1-456a-ab96-8b11cff221ca","Type":"ContainerStarted","Data":"82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc"} Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.668493 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.670676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" event={"ID":"2084f4e2-c329-4c6b-98f3-9814b782b587","Type":"ContainerStarted","Data":"13d45eed1f0706706b5564ffff6172aa9e58369e08775eb3af4d7a0572f099c0"} Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.697220 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-k7nm2" podStartSLOduration=10.262025857 podStartE2EDuration="21.697199787s" podCreationTimestamp="2025-10-06 12:02:04 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.607461243 +0000 UTC m=+883.493486551" lastFinishedPulling="2025-10-06 12:02:21.042635173 +0000 UTC 
m=+894.928660481" observedRunningTime="2025-10-06 12:02:25.695837164 +0000 UTC m=+899.581862472" watchObservedRunningTime="2025-10-06 12:02:25.697199787 +0000 UTC m=+899.583225095" Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.733600 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" podStartSLOduration=15.733576218 podStartE2EDuration="15.733576218s" podCreationTimestamp="2025-10-06 12:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:25.727790026 +0000 UTC m=+899.613815324" watchObservedRunningTime="2025-10-06 12:02:25.733576218 +0000 UTC m=+899.619601556" Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.764239 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:25 crc kubenswrapper[4958]: I1006 12:02:25.767179 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" podStartSLOduration=15.76713412 podStartE2EDuration="15.76713412s" podCreationTimestamp="2025-10-06 12:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:25.763345427 +0000 UTC m=+899.649370745" watchObservedRunningTime="2025-10-06 12:02:25.76713412 +0000 UTC m=+899.653159438" Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.488896 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.559990 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.680228 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="acaf745d-7462-44e9-be0b-28424e3c2f31" containerID="9152c6dddc449babe5504465f88e6939b74b7aea3126a790b2d9ef81442372c6" exitCode=0 Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.680300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acaf745d-7462-44e9-be0b-28424e3c2f31","Type":"ContainerDied","Data":"9152c6dddc449babe5504465f88e6939b74b7aea3126a790b2d9ef81442372c6"} Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.682912 4958 generic.go:334] "Generic (PLEG): container finished" podID="16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51" containerID="202c92107de849f2d757145069517fb92dd687c80f4a98d95e950402d04c7a0a" exitCode=0 Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.683070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51","Type":"ContainerDied","Data":"202c92107de849f2d757145069517fb92dd687c80f4a98d95e950402d04c7a0a"} Oct 06 12:02:26 crc kubenswrapper[4958]: I1006 12:02:26.684608 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.479339 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.513606 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.694860 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acaf745d-7462-44e9-be0b-28424e3c2f31","Type":"ContainerStarted","Data":"c6cb0577e0de5144e9ca7dfd5a3f312697cdd101255aece18eee605cd4c139f2"} Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.697639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51","Type":"ContainerStarted","Data":"7eedb809545e3f64a84dd4f45460a8c0ac95be918ef7e793b64607ca9211aa7b"} Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.698223 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.719301 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.329413733 podStartE2EDuration="30.719281321s" podCreationTimestamp="2025-10-06 12:01:57 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.465691971 +0000 UTC m=+883.351717279" lastFinishedPulling="2025-10-06 12:02:19.855559559 +0000 UTC m=+893.741584867" observedRunningTime="2025-10-06 12:02:27.717175589 +0000 UTC m=+901.603200917" watchObservedRunningTime="2025-10-06 12:02:27.719281321 +0000 UTC m=+901.605306629" Oct 06 12:02:27 crc kubenswrapper[4958]: I1006 12:02:27.757512 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.624046314 podStartE2EDuration="30.757479476s" podCreationTimestamp="2025-10-06 12:01:57 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.412547229 +0000 UTC m=+883.298572537" lastFinishedPulling="2025-10-06 12:02:21.545980381 +0000 UTC m=+895.432005699" observedRunningTime="2025-10-06 12:02:27.75598652 +0000 UTC m=+901.642011838" watchObservedRunningTime="2025-10-06 12:02:27.757479476 +0000 UTC m=+901.643504824" Oct 06 12:02:28 crc kubenswrapper[4958]: I1006 12:02:28.591569 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 06 12:02:28 crc kubenswrapper[4958]: I1006 12:02:28.591883 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 06 12:02:28 crc kubenswrapper[4958]: I1006 12:02:28.766948 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 06 12:02:29 crc kubenswrapper[4958]: I1006 12:02:29.273841 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 06 12:02:29 crc kubenswrapper[4958]: I1006 12:02:29.273899 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 06 12:02:29 crc kubenswrapper[4958]: I1006 12:02:29.610335 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.539326 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.605359 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.765404 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.801301 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 06 12:02:30 crc kubenswrapper[4958]: E1006 12:02:30.801714 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerName="dnsmasq-dns" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.801735 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerName="dnsmasq-dns" Oct 06 12:02:30 crc kubenswrapper[4958]: E1006 12:02:30.801756 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="dnsmasq-dns" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.801764 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="dnsmasq-dns" Oct 06 12:02:30 crc kubenswrapper[4958]: E1006 12:02:30.801791 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerName="init" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.801800 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerName="init" Oct 06 12:02:30 crc kubenswrapper[4958]: E1006 12:02:30.801813 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="init" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.801821 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="init" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.802008 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="682c8402-9403-4a24-a1ae-bf6025255f2b" containerName="dnsmasq-dns" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.802025 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f28c916-3fd1-4c4d-8791-7019551b1900" containerName="dnsmasq-dns" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.802981 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.809903 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.810198 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-q42m4" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.810347 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.810490 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.832284 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.840572 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h8bp2"] Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.840887 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerName="dnsmasq-dns" containerID="cri-o://82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc" gracePeriod=10 Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.932907 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bee2760-9ae4-4988-80cc-1bf507ae032b-config\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.932982 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.933011 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bee2760-9ae4-4988-80cc-1bf507ae032b-scripts\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.933052 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.933074 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bee2760-9ae4-4988-80cc-1bf507ae032b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.933097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfvh\" (UniqueName: \"kubernetes.io/projected/0bee2760-9ae4-4988-80cc-1bf507ae032b-kube-api-access-mmfvh\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:30 crc kubenswrapper[4958]: I1006 12:02:30.933130 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034286 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bee2760-9ae4-4988-80cc-1bf507ae032b-config\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034387 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bee2760-9ae4-4988-80cc-1bf507ae032b-scripts\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034449 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034470 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bee2760-9ae4-4988-80cc-1bf507ae032b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mmfvh\" (UniqueName: \"kubernetes.io/projected/0bee2760-9ae4-4988-80cc-1bf507ae032b-kube-api-access-mmfvh\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.034511 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.035834 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bee2760-9ae4-4988-80cc-1bf507ae032b-config\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.036408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bee2760-9ae4-4988-80cc-1bf507ae032b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.036517 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bee2760-9ae4-4988-80cc-1bf507ae032b-scripts\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.042172 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc 
kubenswrapper[4958]: I1006 12:02:31.042291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.042784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee2760-9ae4-4988-80cc-1bf507ae032b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.055310 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfvh\" (UniqueName: \"kubernetes.io/projected/0bee2760-9ae4-4988-80cc-1bf507ae032b-kube-api-access-mmfvh\") pod \"ovn-northd-0\" (UID: \"0bee2760-9ae4-4988-80cc-1bf507ae032b\") " pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.132914 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.291123 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.339408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-dns-svc\") pod \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.339452 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-ovsdbserver-nb\") pod \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.339522 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-config\") pod \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.339565 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm2fm\" (UniqueName: \"kubernetes.io/projected/fdf0c15c-42d1-456a-ab96-8b11cff221ca-kube-api-access-jm2fm\") pod \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\" (UID: \"fdf0c15c-42d1-456a-ab96-8b11cff221ca\") " Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.346319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf0c15c-42d1-456a-ab96-8b11cff221ca-kube-api-access-jm2fm" (OuterVolumeSpecName: "kube-api-access-jm2fm") pod "fdf0c15c-42d1-456a-ab96-8b11cff221ca" (UID: "fdf0c15c-42d1-456a-ab96-8b11cff221ca"). InnerVolumeSpecName "kube-api-access-jm2fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.381935 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-config" (OuterVolumeSpecName: "config") pod "fdf0c15c-42d1-456a-ab96-8b11cff221ca" (UID: "fdf0c15c-42d1-456a-ab96-8b11cff221ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.384427 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.386678 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdf0c15c-42d1-456a-ab96-8b11cff221ca" (UID: "fdf0c15c-42d1-456a-ab96-8b11cff221ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.404008 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdf0c15c-42d1-456a-ab96-8b11cff221ca" (UID: "fdf0c15c-42d1-456a-ab96-8b11cff221ca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.436712 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.440889 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.440977 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.441064 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf0c15c-42d1-456a-ab96-8b11cff221ca-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.441121 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm2fm\" (UniqueName: \"kubernetes.io/projected/fdf0c15c-42d1-456a-ab96-8b11cff221ca-kube-api-access-jm2fm\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.591803 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 12:02:31 crc kubenswrapper[4958]: W1006 12:02:31.597858 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bee2760_9ae4_4988_80cc_1bf507ae032b.slice/crio-52802dacced024e02442a57be127ecc9a63ef70bdc4fe26d3117705ca9b9f296 WatchSource:0}: Error finding container 52802dacced024e02442a57be127ecc9a63ef70bdc4fe26d3117705ca9b9f296: Status 404 returned error can't find the container with id 52802dacced024e02442a57be127ecc9a63ef70bdc4fe26d3117705ca9b9f296 Oct 06 12:02:31 crc 
kubenswrapper[4958]: I1006 12:02:31.672485 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.733584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bee2760-9ae4-4988-80cc-1bf507ae032b","Type":"ContainerStarted","Data":"52802dacced024e02442a57be127ecc9a63ef70bdc4fe26d3117705ca9b9f296"} Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.735616 4958 generic.go:334] "Generic (PLEG): container finished" podID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerID="82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc" exitCode=0 Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.735680 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.735722 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" event={"ID":"fdf0c15c-42d1-456a-ab96-8b11cff221ca","Type":"ContainerDied","Data":"82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc"} Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.735781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h8bp2" event={"ID":"fdf0c15c-42d1-456a-ab96-8b11cff221ca","Type":"ContainerDied","Data":"a78aac2a9318aebf83ea63e8c12ef0dd9146405bf330f9dc3bb0e99c9530e2f0"} Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.735804 4958 scope.go:117] "RemoveContainer" containerID="82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.769923 4958 scope.go:117] "RemoveContainer" containerID="4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.773414 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-698758b865-sx8gl"] Oct 06 12:02:31 crc kubenswrapper[4958]: E1006 12:02:31.773806 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerName="init" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.773822 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerName="init" Oct 06 12:02:31 crc kubenswrapper[4958]: E1006 12:02:31.773849 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerName="dnsmasq-dns" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.773855 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerName="dnsmasq-dns" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.774008 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" containerName="dnsmasq-dns" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.774972 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.789099 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h8bp2"] Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.795767 4958 scope.go:117] "RemoveContainer" containerID="82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc" Oct 06 12:02:31 crc kubenswrapper[4958]: E1006 12:02:31.796326 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc\": container with ID starting with 82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc not found: ID does not exist" containerID="82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.796358 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc"} err="failed to get container status \"82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc\": rpc error: code = NotFound desc = could not find container \"82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc\": container with ID starting with 82c977da1bd66c386f0c5862ca6d42432af5a4f14b83ebc06723b1c44581fcbc not found: ID does not exist" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.796380 4958 scope.go:117] "RemoveContainer" containerID="4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c" Oct 06 12:02:31 crc kubenswrapper[4958]: E1006 12:02:31.796686 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c\": container with ID starting with 
4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c not found: ID does not exist" containerID="4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.796735 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c"} err="failed to get container status \"4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c\": rpc error: code = NotFound desc = could not find container \"4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c\": container with ID starting with 4616045c5bc2af561013976a45b0d1599755368395e497c8cde3bff13f100b0c not found: ID does not exist" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.796995 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h8bp2"] Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.808204 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-sx8gl"] Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.847705 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpm9\" (UniqueName: \"kubernetes.io/projected/7884f459-0365-448a-b528-326be335a9e3-kube-api-access-2fpm9\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.848262 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.848443 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-dns-svc\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.848497 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.848562 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-config\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.959035 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpm9\" (UniqueName: \"kubernetes.io/projected/7884f459-0365-448a-b528-326be335a9e3-kube-api-access-2fpm9\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.959098 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.959212 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-dns-svc\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.959236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.959259 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-config\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.960352 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-config\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.960504 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-dns-svc\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.960669 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.961229 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:31 crc kubenswrapper[4958]: I1006 12:02:31.976176 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpm9\" (UniqueName: \"kubernetes.io/projected/7884f459-0365-448a-b528-326be335a9e3-kube-api-access-2fpm9\") pod \"dnsmasq-dns-698758b865-sx8gl\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") " pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:32 crc kubenswrapper[4958]: I1006 12:02:32.089633 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:32 crc kubenswrapper[4958]: I1006 12:02:32.587589 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-sx8gl"] Oct 06 12:02:32 crc kubenswrapper[4958]: I1006 12:02:32.684906 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 06 12:02:32 crc kubenswrapper[4958]: I1006 12:02:32.741653 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 06 12:02:32 crc kubenswrapper[4958]: I1006 12:02:32.750859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-sx8gl" event={"ID":"7884f459-0365-448a-b528-326be335a9e3","Type":"ContainerStarted","Data":"c93d8e36e3130a1f799e91e57777e01f227df15915ab22d44ebe514be489348b"} Oct 06 12:02:32 crc kubenswrapper[4958]: I1006 12:02:32.906196 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.014761 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.015887 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.019230 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.019498 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kj7sp" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.019644 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.019754 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.046035 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf0c15c-42d1-456a-ab96-8b11cff221ca" path="/var/lib/kubelet/pods/fdf0c15c-42d1-456a-ab96-8b11cff221ca/volumes" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.099352 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-cache\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.099671 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9m7\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-kube-api-access-8m9m7\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.099981 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-lock\") pod 
\"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.100089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.100444 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.202205 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.202264 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-cache\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.202302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m9m7\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-kube-api-access-8m9m7\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.202355 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-lock\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.202394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: E1006 12:02:33.202532 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:02:33 crc kubenswrapper[4958]: E1006 12:02:33.202553 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:02:33 crc kubenswrapper[4958]: E1006 12:02:33.202603 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift podName:7a90ffe6-00a1-4bee-862b-b1ca74e3185d nodeName:}" failed. No retries permitted until 2025-10-06 12:02:33.702579891 +0000 UTC m=+907.588605209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift") pod "swift-storage-0" (UID: "7a90ffe6-00a1-4bee-862b-b1ca74e3185d") : configmap "swift-ring-files" not found Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.202899 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.203423 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-cache\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.203619 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-lock\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.221929 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m9m7\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-kube-api-access-8m9m7\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.222662 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " 
pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.710720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:33 crc kubenswrapper[4958]: E1006 12:02:33.710853 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:02:33 crc kubenswrapper[4958]: E1006 12:02:33.711072 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:02:33 crc kubenswrapper[4958]: E1006 12:02:33.711125 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift podName:7a90ffe6-00a1-4bee-862b-b1ca74e3185d nodeName:}" failed. No retries permitted until 2025-10-06 12:02:34.711109706 +0000 UTC m=+908.597135014 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift") pod "swift-storage-0" (UID: "7a90ffe6-00a1-4bee-862b-b1ca74e3185d") : configmap "swift-ring-files" not found Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.761370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bee2760-9ae4-4988-80cc-1bf507ae032b","Type":"ContainerStarted","Data":"43604afd95f460466698430575624526b4bf3e51cb854d2ade541767b6cff782"} Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.761439 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bee2760-9ae4-4988-80cc-1bf507ae032b","Type":"ContainerStarted","Data":"1eab3c330fcdaf9d55c04e259877ca54db8bb097a55611a7f3d84010e9b611f2"} Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.761471 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.763871 4958 generic.go:334] "Generic (PLEG): container finished" podID="7884f459-0365-448a-b528-326be335a9e3" containerID="8ffba54cf142986fe765a77c2b8e0fc265dd3ce5f9e50697ee8594bf036a8e7e" exitCode=0 Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.763948 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-sx8gl" event={"ID":"7884f459-0365-448a-b528-326be335a9e3","Type":"ContainerDied","Data":"8ffba54cf142986fe765a77c2b8e0fc265dd3ce5f9e50697ee8594bf036a8e7e"} Oct 06 12:02:33 crc kubenswrapper[4958]: I1006 12:02:33.794292 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.778775243 podStartE2EDuration="3.794275213s" podCreationTimestamp="2025-10-06 12:02:30 +0000 UTC" firstStartedPulling="2025-10-06 12:02:31.600322271 +0000 UTC m=+905.486347579" lastFinishedPulling="2025-10-06 12:02:32.615822231 +0000 
UTC m=+906.501847549" observedRunningTime="2025-10-06 12:02:33.793485823 +0000 UTC m=+907.679511171" watchObservedRunningTime="2025-10-06 12:02:33.794275213 +0000 UTC m=+907.680300531" Oct 06 12:02:34 crc kubenswrapper[4958]: I1006 12:02:34.741610 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:34 crc kubenswrapper[4958]: E1006 12:02:34.741877 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:02:34 crc kubenswrapper[4958]: E1006 12:02:34.742050 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:02:34 crc kubenswrapper[4958]: E1006 12:02:34.742109 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift podName:7a90ffe6-00a1-4bee-862b-b1ca74e3185d nodeName:}" failed. No retries permitted until 2025-10-06 12:02:36.742093206 +0000 UTC m=+910.628118514 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift") pod "swift-storage-0" (UID: "7a90ffe6-00a1-4bee-862b-b1ca74e3185d") : configmap "swift-ring-files" not found Oct 06 12:02:34 crc kubenswrapper[4958]: I1006 12:02:34.774103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-sx8gl" event={"ID":"7884f459-0365-448a-b528-326be335a9e3","Type":"ContainerStarted","Data":"e87e53ebe5f7e4a8f74f3c4d9dea6daa8eaec3c2c356781a77913a111e33335c"} Oct 06 12:02:34 crc kubenswrapper[4958]: I1006 12:02:34.774728 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:34 crc kubenswrapper[4958]: I1006 12:02:34.795487 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-sx8gl" podStartSLOduration=3.7954588830000002 podStartE2EDuration="3.795458883s" podCreationTimestamp="2025-10-06 12:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:34.791576588 +0000 UTC m=+908.677601956" watchObservedRunningTime="2025-10-06 12:02:34.795458883 +0000 UTC m=+908.681484231" Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.787790 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:36 crc kubenswrapper[4958]: E1006 12:02:36.788039 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:02:36 crc kubenswrapper[4958]: E1006 12:02:36.788592 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:02:36 crc kubenswrapper[4958]: E1006 12:02:36.788797 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift podName:7a90ffe6-00a1-4bee-862b-b1ca74e3185d nodeName:}" failed. No retries permitted until 2025-10-06 12:02:40.788769371 +0000 UTC m=+914.674794719 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift") pod "swift-storage-0" (UID: "7a90ffe6-00a1-4bee-862b-b1ca74e3185d") : configmap "swift-ring-files" not found Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.987467 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mvxsm"] Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.989181 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.992753 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.993371 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.993798 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 06 12:02:36 crc kubenswrapper[4958]: I1006 12:02:36.998809 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mvxsm"] Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.095544 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-dispersionconf\") pod 
\"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.095805 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-ring-data-devices\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.095897 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-scripts\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.095999 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-swiftconf\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.096086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-etc-swift\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.096246 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8hl\" (UniqueName: \"kubernetes.io/projected/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-kube-api-access-bw8hl\") 
pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.096341 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-combined-ca-bundle\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198343 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-ring-data-devices\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198434 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-scripts\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198495 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-swiftconf\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198546 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-etc-swift\") pod \"swift-ring-rebalance-mvxsm\" (UID: 
\"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198600 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8hl\" (UniqueName: \"kubernetes.io/projected/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-kube-api-access-bw8hl\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198646 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-combined-ca-bundle\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.198701 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-dispersionconf\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.199537 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-etc-swift\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.199683 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-ring-data-devices\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " 
pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.199703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-scripts\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.208849 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-dispersionconf\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.209076 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-swiftconf\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.213865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-combined-ca-bundle\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.228868 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8hl\" (UniqueName: \"kubernetes.io/projected/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-kube-api-access-bw8hl\") pod \"swift-ring-rebalance-mvxsm\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.318064 4958 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:37 crc kubenswrapper[4958]: I1006 12:02:37.845722 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mvxsm"] Oct 06 12:02:38 crc kubenswrapper[4958]: I1006 12:02:38.822133 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mvxsm" event={"ID":"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f","Type":"ContainerStarted","Data":"511a6cffb38ae9061dce3da932594be3acef1add350f3dcb1148f80b5d913424"} Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.301560 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wqr9j"] Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.303923 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.316124 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wqr9j"] Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.444543 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjml\" (UniqueName: \"kubernetes.io/projected/c828918b-7cf5-4ac3-8edb-a28834ce4249-kube-api-access-tcjml\") pod \"keystone-db-create-wqr9j\" (UID: \"c828918b-7cf5-4ac3-8edb-a28834ce4249\") " pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.506488 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-r2wq4"] Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.507585 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.513913 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r2wq4"] Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.547320 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjml\" (UniqueName: \"kubernetes.io/projected/c828918b-7cf5-4ac3-8edb-a28834ce4249-kube-api-access-tcjml\") pod \"keystone-db-create-wqr9j\" (UID: \"c828918b-7cf5-4ac3-8edb-a28834ce4249\") " pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.578708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcjml\" (UniqueName: \"kubernetes.io/projected/c828918b-7cf5-4ac3-8edb-a28834ce4249-kube-api-access-tcjml\") pod \"keystone-db-create-wqr9j\" (UID: \"c828918b-7cf5-4ac3-8edb-a28834ce4249\") " pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.642330 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.648915 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbvm\" (UniqueName: \"kubernetes.io/projected/07512fa7-630e-4f59-b11f-ef1d0f014d88-kube-api-access-2pbvm\") pod \"placement-db-create-r2wq4\" (UID: \"07512fa7-630e-4f59-b11f-ef1d0f014d88\") " pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.751161 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbvm\" (UniqueName: \"kubernetes.io/projected/07512fa7-630e-4f59-b11f-ef1d0f014d88-kube-api-access-2pbvm\") pod \"placement-db-create-r2wq4\" (UID: \"07512fa7-630e-4f59-b11f-ef1d0f014d88\") " pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.770448 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kcdjr"] Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.771749 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.785360 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kcdjr"] Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.788790 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbvm\" (UniqueName: \"kubernetes.io/projected/07512fa7-630e-4f59-b11f-ef1d0f014d88-kube-api-access-2pbvm\") pod \"placement-db-create-r2wq4\" (UID: \"07512fa7-630e-4f59-b11f-ef1d0f014d88\") " pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.830618 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.852435 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crcz9\" (UniqueName: \"kubernetes.io/projected/0f21ab19-342f-4a65-9ebc-37c9e83c1099-kube-api-access-crcz9\") pod \"glance-db-create-kcdjr\" (UID: \"0f21ab19-342f-4a65-9ebc-37c9e83c1099\") " pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.954382 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crcz9\" (UniqueName: \"kubernetes.io/projected/0f21ab19-342f-4a65-9ebc-37c9e83c1099-kube-api-access-crcz9\") pod \"glance-db-create-kcdjr\" (UID: \"0f21ab19-342f-4a65-9ebc-37c9e83c1099\") " pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:39 crc kubenswrapper[4958]: I1006 12:02:39.980582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crcz9\" (UniqueName: \"kubernetes.io/projected/0f21ab19-342f-4a65-9ebc-37c9e83c1099-kube-api-access-crcz9\") pod \"glance-db-create-kcdjr\" (UID: \"0f21ab19-342f-4a65-9ebc-37c9e83c1099\") " pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:40 crc kubenswrapper[4958]: I1006 12:02:40.126778 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:40 crc kubenswrapper[4958]: I1006 12:02:40.867760 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:40 crc kubenswrapper[4958]: E1006 12:02:40.867941 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:02:40 crc kubenswrapper[4958]: E1006 12:02:40.868453 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:02:40 crc kubenswrapper[4958]: E1006 12:02:40.868518 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift podName:7a90ffe6-00a1-4bee-862b-b1ca74e3185d nodeName:}" failed. No retries permitted until 2025-10-06 12:02:48.868502196 +0000 UTC m=+922.754527504 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift") pod "swift-storage-0" (UID: "7a90ffe6-00a1-4bee-862b-b1ca74e3185d") : configmap "swift-ring-files" not found Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.101498 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wqr9j"] Oct 06 12:02:41 crc kubenswrapper[4958]: W1006 12:02:41.105995 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc828918b_7cf5_4ac3_8edb_a28834ce4249.slice/crio-090d8686241cb0a37aab4cde2e3af4ae00bb748eeefe0df04a7161913d8a3d07 WatchSource:0}: Error finding container 090d8686241cb0a37aab4cde2e3af4ae00bb748eeefe0df04a7161913d8a3d07: Status 404 returned error can't find the container with id 090d8686241cb0a37aab4cde2e3af4ae00bb748eeefe0df04a7161913d8a3d07 Oct 06 12:02:41 crc kubenswrapper[4958]: W1006 12:02:41.106959 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f21ab19_342f_4a65_9ebc_37c9e83c1099.slice/crio-3c8d63221c285fc6487c9b5e70b045b251c7c1b2a41d8f94ade6a5ba6456b8f0 WatchSource:0}: Error finding container 3c8d63221c285fc6487c9b5e70b045b251c7c1b2a41d8f94ade6a5ba6456b8f0: Status 404 returned error can't find the container with id 3c8d63221c285fc6487c9b5e70b045b251c7c1b2a41d8f94ade6a5ba6456b8f0 Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.107941 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kcdjr"] Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.212692 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r2wq4"] Oct 06 12:02:41 crc kubenswrapper[4958]: W1006 12:02:41.215009 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07512fa7_630e_4f59_b11f_ef1d0f014d88.slice/crio-c7a0dc4e4274ea6931d5523184dc6229a12272ce4286428c47e46c0b8fe62919 WatchSource:0}: Error finding container c7a0dc4e4274ea6931d5523184dc6229a12272ce4286428c47e46c0b8fe62919: Status 404 returned error can't find the container with id c7a0dc4e4274ea6931d5523184dc6229a12272ce4286428c47e46c0b8fe62919 Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.872832 4958 generic.go:334] "Generic (PLEG): container finished" podID="0f21ab19-342f-4a65-9ebc-37c9e83c1099" containerID="8dabafb2cf0213490fa5b5bea5dbc8c60a6d07907857f8ae7b51cd92f834644c" exitCode=0 Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.872916 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kcdjr" event={"ID":"0f21ab19-342f-4a65-9ebc-37c9e83c1099","Type":"ContainerDied","Data":"8dabafb2cf0213490fa5b5bea5dbc8c60a6d07907857f8ae7b51cd92f834644c"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.872943 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kcdjr" event={"ID":"0f21ab19-342f-4a65-9ebc-37c9e83c1099","Type":"ContainerStarted","Data":"3c8d63221c285fc6487c9b5e70b045b251c7c1b2a41d8f94ade6a5ba6456b8f0"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.876420 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mvxsm" event={"ID":"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f","Type":"ContainerStarted","Data":"f285ba67a336eb089ebdd738140537b042799fc0a7da41757625f6d33f17e186"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.884451 4958 generic.go:334] "Generic (PLEG): container finished" podID="07512fa7-630e-4f59-b11f-ef1d0f014d88" containerID="2359cce51ae6486b852d0fabec225f8159ba0a26aa51b3ac1aebcb8cdf74d4e5" exitCode=0 Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.884575 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-r2wq4" event={"ID":"07512fa7-630e-4f59-b11f-ef1d0f014d88","Type":"ContainerDied","Data":"2359cce51ae6486b852d0fabec225f8159ba0a26aa51b3ac1aebcb8cdf74d4e5"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.884643 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r2wq4" event={"ID":"07512fa7-630e-4f59-b11f-ef1d0f014d88","Type":"ContainerStarted","Data":"c7a0dc4e4274ea6931d5523184dc6229a12272ce4286428c47e46c0b8fe62919"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.911807 4958 generic.go:334] "Generic (PLEG): container finished" podID="c828918b-7cf5-4ac3-8edb-a28834ce4249" containerID="94995a963fdafe4a2ba4bef81b89b364bcfc0f8134d6805905d95f7cf5bb7c16" exitCode=0 Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.911870 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqr9j" event={"ID":"c828918b-7cf5-4ac3-8edb-a28834ce4249","Type":"ContainerDied","Data":"94995a963fdafe4a2ba4bef81b89b364bcfc0f8134d6805905d95f7cf5bb7c16"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.911903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqr9j" event={"ID":"c828918b-7cf5-4ac3-8edb-a28834ce4249","Type":"ContainerStarted","Data":"090d8686241cb0a37aab4cde2e3af4ae00bb748eeefe0df04a7161913d8a3d07"} Oct 06 12:02:41 crc kubenswrapper[4958]: I1006 12:02:41.943303 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mvxsm" podStartSLOduration=3.154874758 podStartE2EDuration="5.943280316s" podCreationTimestamp="2025-10-06 12:02:36 +0000 UTC" firstStartedPulling="2025-10-06 12:02:37.861082546 +0000 UTC m=+911.747107864" lastFinishedPulling="2025-10-06 12:02:40.649488114 +0000 UTC m=+914.535513422" observedRunningTime="2025-10-06 12:02:41.925480379 +0000 UTC m=+915.811505707" watchObservedRunningTime="2025-10-06 12:02:41.943280316 +0000 UTC m=+915.829305634" 
Oct 06 12:02:42 crc kubenswrapper[4958]: I1006 12:02:42.091404 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-sx8gl" Oct 06 12:02:42 crc kubenswrapper[4958]: I1006 12:02:42.177664 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmmpn"] Oct 06 12:02:42 crc kubenswrapper[4958]: I1006 12:02:42.177918 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerName="dnsmasq-dns" containerID="cri-o://13d45eed1f0706706b5564ffff6172aa9e58369e08775eb3af4d7a0572f099c0" gracePeriod=10 Oct 06 12:02:42 crc kubenswrapper[4958]: I1006 12:02:42.923667 4958 generic.go:334] "Generic (PLEG): container finished" podID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerID="13d45eed1f0706706b5564ffff6172aa9e58369e08775eb3af4d7a0572f099c0" exitCode=0 Oct 06 12:02:42 crc kubenswrapper[4958]: I1006 12:02:42.928105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" event={"ID":"2084f4e2-c329-4c6b-98f3-9814b782b587","Type":"ContainerDied","Data":"13d45eed1f0706706b5564ffff6172aa9e58369e08775eb3af4d7a0572f099c0"} Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.296821 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.405236 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.410758 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.457157 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcjml\" (UniqueName: \"kubernetes.io/projected/c828918b-7cf5-4ac3-8edb-a28834ce4249-kube-api-access-tcjml\") pod \"c828918b-7cf5-4ac3-8edb-a28834ce4249\" (UID: \"c828918b-7cf5-4ac3-8edb-a28834ce4249\") " Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.463339 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c828918b-7cf5-4ac3-8edb-a28834ce4249-kube-api-access-tcjml" (OuterVolumeSpecName: "kube-api-access-tcjml") pod "c828918b-7cf5-4ac3-8edb-a28834ce4249" (UID: "c828918b-7cf5-4ac3-8edb-a28834ce4249"). InnerVolumeSpecName "kube-api-access-tcjml". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.558287 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crcz9\" (UniqueName: \"kubernetes.io/projected/0f21ab19-342f-4a65-9ebc-37c9e83c1099-kube-api-access-crcz9\") pod \"0f21ab19-342f-4a65-9ebc-37c9e83c1099\" (UID: \"0f21ab19-342f-4a65-9ebc-37c9e83c1099\") " Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.558478 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pbvm\" (UniqueName: \"kubernetes.io/projected/07512fa7-630e-4f59-b11f-ef1d0f014d88-kube-api-access-2pbvm\") pod \"07512fa7-630e-4f59-b11f-ef1d0f014d88\" (UID: \"07512fa7-630e-4f59-b11f-ef1d0f014d88\") " Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.558801 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcjml\" (UniqueName: \"kubernetes.io/projected/c828918b-7cf5-4ac3-8edb-a28834ce4249-kube-api-access-tcjml\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.561303 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07512fa7-630e-4f59-b11f-ef1d0f014d88-kube-api-access-2pbvm" (OuterVolumeSpecName: "kube-api-access-2pbvm") pod "07512fa7-630e-4f59-b11f-ef1d0f014d88" (UID: "07512fa7-630e-4f59-b11f-ef1d0f014d88"). InnerVolumeSpecName "kube-api-access-2pbvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.562590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f21ab19-342f-4a65-9ebc-37c9e83c1099-kube-api-access-crcz9" (OuterVolumeSpecName: "kube-api-access-crcz9") pod "0f21ab19-342f-4a65-9ebc-37c9e83c1099" (UID: "0f21ab19-342f-4a65-9ebc-37c9e83c1099"). InnerVolumeSpecName "kube-api-access-crcz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.660645 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pbvm\" (UniqueName: \"kubernetes.io/projected/07512fa7-630e-4f59-b11f-ef1d0f014d88-kube-api-access-2pbvm\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.660702 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crcz9\" (UniqueName: \"kubernetes.io/projected/0f21ab19-342f-4a65-9ebc-37c9e83c1099-kube-api-access-crcz9\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.938904 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kcdjr" event={"ID":"0f21ab19-342f-4a65-9ebc-37c9e83c1099","Type":"ContainerDied","Data":"3c8d63221c285fc6487c9b5e70b045b251c7c1b2a41d8f94ade6a5ba6456b8f0"} Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.938977 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c8d63221c285fc6487c9b5e70b045b251c7c1b2a41d8f94ade6a5ba6456b8f0" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 
12:02:43.938934 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kcdjr" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.942018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r2wq4" event={"ID":"07512fa7-630e-4f59-b11f-ef1d0f014d88","Type":"ContainerDied","Data":"c7a0dc4e4274ea6931d5523184dc6229a12272ce4286428c47e46c0b8fe62919"} Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.942068 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a0dc4e4274ea6931d5523184dc6229a12272ce4286428c47e46c0b8fe62919" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.942130 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r2wq4" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.945539 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqr9j" event={"ID":"c828918b-7cf5-4ac3-8edb-a28834ce4249","Type":"ContainerDied","Data":"090d8686241cb0a37aab4cde2e3af4ae00bb748eeefe0df04a7161913d8a3d07"} Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.945578 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="090d8686241cb0a37aab4cde2e3af4ae00bb748eeefe0df04a7161913d8a3d07" Oct 06 12:02:43 crc kubenswrapper[4958]: I1006 12:02:43.945671 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqr9j" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.530372 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.695706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-dns-svc\") pod \"2084f4e2-c329-4c6b-98f3-9814b782b587\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.696121 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-nb\") pod \"2084f4e2-c329-4c6b-98f3-9814b782b587\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.696459 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5djjp\" (UniqueName: \"kubernetes.io/projected/2084f4e2-c329-4c6b-98f3-9814b782b587-kube-api-access-5djjp\") pod \"2084f4e2-c329-4c6b-98f3-9814b782b587\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.698139 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-sb\") pod \"2084f4e2-c329-4c6b-98f3-9814b782b587\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.698452 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-config\") pod \"2084f4e2-c329-4c6b-98f3-9814b782b587\" (UID: \"2084f4e2-c329-4c6b-98f3-9814b782b587\") " Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.702708 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2084f4e2-c329-4c6b-98f3-9814b782b587-kube-api-access-5djjp" (OuterVolumeSpecName: "kube-api-access-5djjp") pod "2084f4e2-c329-4c6b-98f3-9814b782b587" (UID: "2084f4e2-c329-4c6b-98f3-9814b782b587"). InnerVolumeSpecName "kube-api-access-5djjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.743122 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-config" (OuterVolumeSpecName: "config") pod "2084f4e2-c329-4c6b-98f3-9814b782b587" (UID: "2084f4e2-c329-4c6b-98f3-9814b782b587"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.750659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2084f4e2-c329-4c6b-98f3-9814b782b587" (UID: "2084f4e2-c329-4c6b-98f3-9814b782b587"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.759357 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2084f4e2-c329-4c6b-98f3-9814b782b587" (UID: "2084f4e2-c329-4c6b-98f3-9814b782b587"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.770211 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2084f4e2-c329-4c6b-98f3-9814b782b587" (UID: "2084f4e2-c329-4c6b-98f3-9814b782b587"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.801121 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.801184 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.801203 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.801222 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2084f4e2-c329-4c6b-98f3-9814b782b587-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.801241 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5djjp\" (UniqueName: \"kubernetes.io/projected/2084f4e2-c329-4c6b-98f3-9814b782b587-kube-api-access-5djjp\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.970767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" event={"ID":"2084f4e2-c329-4c6b-98f3-9814b782b587","Type":"ContainerDied","Data":"78a0043918bbb3f8bc2b40ad0fd59a371cb419e3541524ed18a45b7c4099a0e6"} Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.970851 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmmpn" Oct 06 12:02:45 crc kubenswrapper[4958]: I1006 12:02:45.971324 4958 scope.go:117] "RemoveContainer" containerID="13d45eed1f0706706b5564ffff6172aa9e58369e08775eb3af4d7a0572f099c0" Oct 06 12:02:46 crc kubenswrapper[4958]: I1006 12:02:46.004535 4958 scope.go:117] "RemoveContainer" containerID="d373f140ed0982edd619a65bad33aa8467267528d63e97bf9aee7d5cfde78b34" Oct 06 12:02:46 crc kubenswrapper[4958]: I1006 12:02:46.032933 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmmpn"] Oct 06 12:02:46 crc kubenswrapper[4958]: I1006 12:02:46.051235 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmmpn"] Oct 06 12:02:46 crc kubenswrapper[4958]: I1006 12:02:46.200521 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 06 12:02:46 crc kubenswrapper[4958]: I1006 12:02:46.921175 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" path="/var/lib/kubelet/pods/2084f4e2-c329-4c6b-98f3-9814b782b587/volumes" Oct 06 12:02:47 crc kubenswrapper[4958]: I1006 12:02:47.997165 4958 generic.go:334] "Generic (PLEG): container finished" podID="368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" containerID="f285ba67a336eb089ebdd738140537b042799fc0a7da41757625f6d33f17e186" exitCode=0 Oct 06 12:02:47 crc kubenswrapper[4958]: I1006 12:02:47.997256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mvxsm" event={"ID":"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f","Type":"ContainerDied","Data":"f285ba67a336eb089ebdd738140537b042799fc0a7da41757625f6d33f17e186"} Oct 06 12:02:48 crc kubenswrapper[4958]: I1006 12:02:48.956590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod 
\"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:48 crc kubenswrapper[4958]: I1006 12:02:48.967323 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a90ffe6-00a1-4bee-862b-b1ca74e3185d-etc-swift\") pod \"swift-storage-0\" (UID: \"7a90ffe6-00a1-4bee-862b-b1ca74e3185d\") " pod="openstack/swift-storage-0" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.034236 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.382123 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464376 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-ring-data-devices\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464512 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-etc-swift\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464604 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw8hl\" (UniqueName: \"kubernetes.io/projected/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-kube-api-access-bw8hl\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464644 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-scripts\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-swiftconf\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464789 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-dispersionconf\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.464834 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-combined-ca-bundle\") pod \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\" (UID: \"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f\") " Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.466091 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.466233 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.470333 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-kube-api-access-bw8hl" (OuterVolumeSpecName: "kube-api-access-bw8hl") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "kube-api-access-bw8hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.477183 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.504384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.505855 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-scripts" (OuterVolumeSpecName: "scripts") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.507581 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" (UID: "368db9f7-1e1b-42e1-a8d7-af0c7d9d910f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.553483 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8f99-account-create-xhcbg"] Oct 06 12:02:49 crc kubenswrapper[4958]: E1006 12:02:49.553888 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f21ab19-342f-4a65-9ebc-37c9e83c1099" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.553905 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f21ab19-342f-4a65-9ebc-37c9e83c1099" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: E1006 12:02:49.553942 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerName="init" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.553950 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerName="init" Oct 06 12:02:49 crc kubenswrapper[4958]: E1006 12:02:49.553964 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerName="dnsmasq-dns" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.553973 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerName="dnsmasq-dns" Oct 06 12:02:49 crc kubenswrapper[4958]: E1006 12:02:49.553986 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" containerName="swift-ring-rebalance" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.553993 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" containerName="swift-ring-rebalance" Oct 06 12:02:49 crc kubenswrapper[4958]: E1006 12:02:49.554006 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c828918b-7cf5-4ac3-8edb-a28834ce4249" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554014 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c828918b-7cf5-4ac3-8edb-a28834ce4249" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: E1006 12:02:49.554030 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07512fa7-630e-4f59-b11f-ef1d0f014d88" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554038 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="07512fa7-630e-4f59-b11f-ef1d0f014d88" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554248 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2084f4e2-c329-4c6b-98f3-9814b782b587" containerName="dnsmasq-dns" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554271 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f21ab19-342f-4a65-9ebc-37c9e83c1099" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554282 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="07512fa7-630e-4f59-b11f-ef1d0f014d88" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554294 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c828918b-7cf5-4ac3-8edb-a28834ce4249" containerName="mariadb-database-create" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554306 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="368db9f7-1e1b-42e1-a8d7-af0c7d9d910f" containerName="swift-ring-rebalance" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.554954 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.560426 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.561042 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8f99-account-create-xhcbg"] Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566357 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw8hl\" (UniqueName: \"kubernetes.io/projected/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-kube-api-access-bw8hl\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566388 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566400 4958 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566412 4958 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" 
(UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566424 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566436 4958 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.566446 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/368db9f7-1e1b-42e1-a8d7-af0c7d9d910f-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:49 crc kubenswrapper[4958]: W1006 12:02:49.645130 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a90ffe6_00a1_4bee_862b_b1ca74e3185d.slice/crio-6deab2f8e794d8a4787670bceb85ab846727f4a4c3cc2731bc2965bce88e0f96 WatchSource:0}: Error finding container 6deab2f8e794d8a4787670bceb85ab846727f4a4c3cc2731bc2965bce88e0f96: Status 404 returned error can't find the container with id 6deab2f8e794d8a4787670bceb85ab846727f4a4c3cc2731bc2965bce88e0f96 Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.653648 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.667632 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgkp7\" (UniqueName: \"kubernetes.io/projected/8e9b74b3-2333-45a2-9f90-c015f543989d-kube-api-access-sgkp7\") pod \"placement-8f99-account-create-xhcbg\" (UID: 
\"8e9b74b3-2333-45a2-9f90-c015f543989d\") " pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.769442 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgkp7\" (UniqueName: \"kubernetes.io/projected/8e9b74b3-2333-45a2-9f90-c015f543989d-kube-api-access-sgkp7\") pod \"placement-8f99-account-create-xhcbg\" (UID: \"8e9b74b3-2333-45a2-9f90-c015f543989d\") " pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.793070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgkp7\" (UniqueName: \"kubernetes.io/projected/8e9b74b3-2333-45a2-9f90-c015f543989d-kube-api-access-sgkp7\") pod \"placement-8f99-account-create-xhcbg\" (UID: \"8e9b74b3-2333-45a2-9f90-c015f543989d\") " pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.838716 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ad5d-account-create-bwdzc"] Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.839697 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.842258 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.857368 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ad5d-account-create-bwdzc"] Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.891832 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:49 crc kubenswrapper[4958]: I1006 12:02:49.977514 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljfv\" (UniqueName: \"kubernetes.io/projected/f203980b-58c6-4be4-8b4c-7f3e67b7de9c-kube-api-access-mljfv\") pod \"glance-ad5d-account-create-bwdzc\" (UID: \"f203980b-58c6-4be4-8b4c-7f3e67b7de9c\") " pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.033818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"6deab2f8e794d8a4787670bceb85ab846727f4a4c3cc2731bc2965bce88e0f96"} Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.035878 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mvxsm" event={"ID":"368db9f7-1e1b-42e1-a8d7-af0c7d9d910f","Type":"ContainerDied","Data":"511a6cffb38ae9061dce3da932594be3acef1add350f3dcb1148f80b5d913424"} Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.035895 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511a6cffb38ae9061dce3da932594be3acef1add350f3dcb1148f80b5d913424" Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.035969 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mvxsm" Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.080925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mljfv\" (UniqueName: \"kubernetes.io/projected/f203980b-58c6-4be4-8b4c-7f3e67b7de9c-kube-api-access-mljfv\") pod \"glance-ad5d-account-create-bwdzc\" (UID: \"f203980b-58c6-4be4-8b4c-7f3e67b7de9c\") " pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.104124 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mljfv\" (UniqueName: \"kubernetes.io/projected/f203980b-58c6-4be4-8b4c-7f3e67b7de9c-kube-api-access-mljfv\") pod \"glance-ad5d-account-create-bwdzc\" (UID: \"f203980b-58c6-4be4-8b4c-7f3e67b7de9c\") " pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.207175 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.391875 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8f99-account-create-xhcbg"] Oct 06 12:02:50 crc kubenswrapper[4958]: I1006 12:02:50.630280 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ad5d-account-create-bwdzc"] Oct 06 12:02:51 crc kubenswrapper[4958]: I1006 12:02:51.047075 4958 generic.go:334] "Generic (PLEG): container finished" podID="8e9b74b3-2333-45a2-9f90-c015f543989d" containerID="b3e58623d5f51a6601bc53362f56c55b0979b0e6b819c574d4828c6e866c78ed" exitCode=0 Oct 06 12:02:51 crc kubenswrapper[4958]: I1006 12:02:51.047226 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8f99-account-create-xhcbg" event={"ID":"8e9b74b3-2333-45a2-9f90-c015f543989d","Type":"ContainerDied","Data":"b3e58623d5f51a6601bc53362f56c55b0979b0e6b819c574d4828c6e866c78ed"} Oct 06 12:02:51 
crc kubenswrapper[4958]: I1006 12:02:51.047588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8f99-account-create-xhcbg" event={"ID":"8e9b74b3-2333-45a2-9f90-c015f543989d","Type":"ContainerStarted","Data":"8342d422b07503c7f9a1bbec3b8c59ea0164764bce4c4463bdf81d737b02eca4"} Oct 06 12:02:51 crc kubenswrapper[4958]: I1006 12:02:51.053385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"3c682037648d6b18d189411e73bad2602e1f09ba47406f64427ecac3780df661"} Oct 06 12:02:51 crc kubenswrapper[4958]: I1006 12:02:51.054984 4958 generic.go:334] "Generic (PLEG): container finished" podID="f203980b-58c6-4be4-8b4c-7f3e67b7de9c" containerID="21f68007797b01fab017c4476f71a86655b274243b82ee6d9974b37f51cae81b" exitCode=0 Oct 06 12:02:51 crc kubenswrapper[4958]: I1006 12:02:51.055030 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad5d-account-create-bwdzc" event={"ID":"f203980b-58c6-4be4-8b4c-7f3e67b7de9c","Type":"ContainerDied","Data":"21f68007797b01fab017c4476f71a86655b274243b82ee6d9974b37f51cae81b"} Oct 06 12:02:51 crc kubenswrapper[4958]: I1006 12:02:51.055056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad5d-account-create-bwdzc" event={"ID":"f203980b-58c6-4be4-8b4c-7f3e67b7de9c","Type":"ContainerStarted","Data":"070bd43796e0858461202fef0af28a6ae9e4e48099dcabcfb00fac6a408be9ff"} Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.069655 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"66d501f77e7fbfaa3d3664b65eb091e099b3a46c6c94af766b81e78422c34972"} Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.070114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"ef067665c173189599385f5bd35162096fff8adfc848b30b224ca9025ac5bdee"} Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.070189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"3f0318e5eb805231374e0e666be54b43133c18fffe72c1ee12399b9f36b5e9ab"} Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.687509 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.691941 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.826370 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgkp7\" (UniqueName: \"kubernetes.io/projected/8e9b74b3-2333-45a2-9f90-c015f543989d-kube-api-access-sgkp7\") pod \"8e9b74b3-2333-45a2-9f90-c015f543989d\" (UID: \"8e9b74b3-2333-45a2-9f90-c015f543989d\") " Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.826531 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mljfv\" (UniqueName: \"kubernetes.io/projected/f203980b-58c6-4be4-8b4c-7f3e67b7de9c-kube-api-access-mljfv\") pod \"f203980b-58c6-4be4-8b4c-7f3e67b7de9c\" (UID: \"f203980b-58c6-4be4-8b4c-7f3e67b7de9c\") " Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.834374 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9b74b3-2333-45a2-9f90-c015f543989d-kube-api-access-sgkp7" (OuterVolumeSpecName: "kube-api-access-sgkp7") pod "8e9b74b3-2333-45a2-9f90-c015f543989d" (UID: "8e9b74b3-2333-45a2-9f90-c015f543989d"). InnerVolumeSpecName "kube-api-access-sgkp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.834492 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f203980b-58c6-4be4-8b4c-7f3e67b7de9c-kube-api-access-mljfv" (OuterVolumeSpecName: "kube-api-access-mljfv") pod "f203980b-58c6-4be4-8b4c-7f3e67b7de9c" (UID: "f203980b-58c6-4be4-8b4c-7f3e67b7de9c"). InnerVolumeSpecName "kube-api-access-mljfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.931252 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mljfv\" (UniqueName: \"kubernetes.io/projected/f203980b-58c6-4be4-8b4c-7f3e67b7de9c-kube-api-access-mljfv\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:52 crc kubenswrapper[4958]: I1006 12:02:52.931303 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgkp7\" (UniqueName: \"kubernetes.io/projected/8e9b74b3-2333-45a2-9f90-c015f543989d-kube-api-access-sgkp7\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.080274 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad5d-account-create-bwdzc" event={"ID":"f203980b-58c6-4be4-8b4c-7f3e67b7de9c","Type":"ContainerDied","Data":"070bd43796e0858461202fef0af28a6ae9e4e48099dcabcfb00fac6a408be9ff"} Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.080314 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="070bd43796e0858461202fef0af28a6ae9e4e48099dcabcfb00fac6a408be9ff" Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.080284 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ad5d-account-create-bwdzc" Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.088963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8f99-account-create-xhcbg" event={"ID":"8e9b74b3-2333-45a2-9f90-c015f543989d","Type":"ContainerDied","Data":"8342d422b07503c7f9a1bbec3b8c59ea0164764bce4c4463bdf81d737b02eca4"} Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.089004 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8342d422b07503c7f9a1bbec3b8c59ea0164764bce4c4463bdf81d737b02eca4" Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.089009 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8f99-account-create-xhcbg" Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.093993 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"56a6767b2a2bb98c868ba1a24b61db34374833fa91fbbc0d50825704c47857f6"} Oct 06 12:02:53 crc kubenswrapper[4958]: I1006 12:02:53.094049 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"eaceec28e15d27d8c27c377fe3a52c6511963635fff5e7c2a2a344b265f5f64d"} Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.112061 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"49f3521677c9431216b89a3c7fecb1b1b897c9345e6cc2a64d4111ff26f7e093"} Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.112603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"90fac2b7f59bdc5da4e61c72179bf196c5cdf4652dc5b0e6145f0fa4dca4c2d0"} Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.904647 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jxgv9"] Oct 06 12:02:54 crc kubenswrapper[4958]: E1006 12:02:54.905004 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9b74b3-2333-45a2-9f90-c015f543989d" containerName="mariadb-account-create" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.905022 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9b74b3-2333-45a2-9f90-c015f543989d" containerName="mariadb-account-create" Oct 06 12:02:54 crc kubenswrapper[4958]: E1006 12:02:54.905042 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f203980b-58c6-4be4-8b4c-7f3e67b7de9c" containerName="mariadb-account-create" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.905049 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f203980b-58c6-4be4-8b4c-7f3e67b7de9c" containerName="mariadb-account-create" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.905297 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9b74b3-2333-45a2-9f90-c015f543989d" containerName="mariadb-account-create" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.905333 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f203980b-58c6-4be4-8b4c-7f3e67b7de9c" containerName="mariadb-account-create" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.905883 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.910913 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r7q2w" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.910915 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 06 12:02:54 crc kubenswrapper[4958]: I1006 12:02:54.930437 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jxgv9"] Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.011364 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5x288" podUID="ca1fdee5-1c5e-4740-b69a-d2111ba255ee" containerName="ovn-controller" probeResult="failure" output=< Oct 06 12:02:55 crc kubenswrapper[4958]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 06 12:02:55 crc kubenswrapper[4958]: > Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.086883 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.089218 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k7nm2" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.091940 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-combined-ca-bundle\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.091988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-config-data\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.092012 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvc5\" (UniqueName: \"kubernetes.io/projected/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-kube-api-access-hcvc5\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.092052 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-db-sync-config-data\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.193132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-combined-ca-bundle\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.193191 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-config-data\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.193214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcvc5\" (UniqueName: 
\"kubernetes.io/projected/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-kube-api-access-hcvc5\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.193244 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-db-sync-config-data\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.198572 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-db-sync-config-data\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.200552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-combined-ca-bundle\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.200729 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-config-data\") pod \"glance-db-sync-jxgv9\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.215015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcvc5\" (UniqueName: \"kubernetes.io/projected/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-kube-api-access-hcvc5\") pod \"glance-db-sync-jxgv9\" (UID: 
\"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.237919 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jxgv9" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.314279 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5x288-config-4srf8"] Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.315795 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.318734 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.340052 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5x288-config-4srf8"] Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.498032 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-additional-scripts\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.498075 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run-ovn\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.498098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-scripts\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.498182 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-log-ovn\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.498210 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.498252 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqknq\" (UniqueName: \"kubernetes.io/projected/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-kube-api-access-lqknq\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-log-ovn\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600374 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqknq\" (UniqueName: \"kubernetes.io/projected/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-kube-api-access-lqknq\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-additional-scripts\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run-ovn\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-scripts\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.600954 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.601689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-additional-scripts\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.601830 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run-ovn\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.601927 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-log-ovn\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.605631 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-scripts\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.629303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqknq\" (UniqueName: 
\"kubernetes.io/projected/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-kube-api-access-lqknq\") pod \"ovn-controller-5x288-config-4srf8\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.646998 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.810889 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jxgv9"] Oct 06 12:02:55 crc kubenswrapper[4958]: W1006 12:02:55.828528 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d53199_b7b6_4d78_9b4a_53ec81b1041d.slice/crio-a9bb83df40b5230af12a72137e79620a7106ebf30b379fdad3300019164c863f WatchSource:0}: Error finding container a9bb83df40b5230af12a72137e79620a7106ebf30b379fdad3300019164c863f: Status 404 returned error can't find the container with id a9bb83df40b5230af12a72137e79620a7106ebf30b379fdad3300019164c863f Oct 06 12:02:55 crc kubenswrapper[4958]: I1006 12:02:55.939173 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5x288-config-4srf8"] Oct 06 12:02:56 crc kubenswrapper[4958]: W1006 12:02:56.010958 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02f903c_67d6_4ceb_8da4_fb3d04fcc119.slice/crio-3657b12b8d9f021d61a32e6b3af2df19821326506657893a6da260e10c64a9ba WatchSource:0}: Error finding container 3657b12b8d9f021d61a32e6b3af2df19821326506657893a6da260e10c64a9ba: Status 404 returned error can't find the container with id 3657b12b8d9f021d61a32e6b3af2df19821326506657893a6da260e10c64a9ba Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.131972 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jxgv9" 
event={"ID":"e3d53199-b7b6-4d78-9b4a-53ec81b1041d","Type":"ContainerStarted","Data":"a9bb83df40b5230af12a72137e79620a7106ebf30b379fdad3300019164c863f"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.132705 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5x288-config-4srf8" event={"ID":"c02f903c-67d6-4ceb-8da4-fb3d04fcc119","Type":"ContainerStarted","Data":"3657b12b8d9f021d61a32e6b3af2df19821326506657893a6da260e10c64a9ba"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.136717 4958 generic.go:334] "Generic (PLEG): container finished" podID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerID="7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697" exitCode=0 Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.136783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2542eba4-43d2-4108-a6f0-8eb4a1714f77","Type":"ContainerDied","Data":"7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.163895 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"1ff58dd70bd63bc1fd3a3b4dd9a16fee3c538b99b83819ba3952dbbd7e0be2cb"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.164120 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"ae0571ae2599471bcc693f4cfd79cfa5ca13279142de4332573a8325d6ac4d9f"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.164131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"842a1c3a95248956c252fcae941f7e04f244eec744046b56bfd0dbc0135e1ead"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.164139 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"c1f1c829a8f37619b1634672eb753a9837379e4b2c75744bfed764695de8f87f"} Oct 06 12:02:56 crc kubenswrapper[4958]: I1006 12:02:56.164149 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"257dc9967ec6c5a14f965c5502b3af04b546cf603c7aa132201bb02dfec78105"} Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.175228 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerID="9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6" exitCode=0 Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.175338 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c931ada-9afe-4ec4-9f75-42db89dc36e8","Type":"ContainerDied","Data":"9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6"} Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.181371 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2542eba4-43d2-4108-a6f0-8eb4a1714f77","Type":"ContainerStarted","Data":"327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e"} Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.182342 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.189302 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"eea1bb66451045ec87d7481743e728034374ec66436c78e993a1c5464cc0506f"} Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.189338 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"7a90ffe6-00a1-4bee-862b-b1ca74e3185d","Type":"ContainerStarted","Data":"f75cd60b0e601c21b98c47213a8f9d92d00670bdf6bc190d0cbd5f50c3afc710"} Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.192790 4958 generic.go:334] "Generic (PLEG): container finished" podID="c02f903c-67d6-4ceb-8da4-fb3d04fcc119" containerID="30cd0c216f2064ac080a7bb4f98aba5b4c0e6e0ef7cc12f7ee211c9ad90da191" exitCode=0 Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.192828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5x288-config-4srf8" event={"ID":"c02f903c-67d6-4ceb-8da4-fb3d04fcc119","Type":"ContainerDied","Data":"30cd0c216f2064ac080a7bb4f98aba5b4c0e6e0ef7cc12f7ee211c9ad90da191"} Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.242151 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.873574672 podStartE2EDuration="26.242127339s" podCreationTimestamp="2025-10-06 12:02:31 +0000 UTC" firstStartedPulling="2025-10-06 12:02:49.648018957 +0000 UTC m=+923.534044265" lastFinishedPulling="2025-10-06 12:02:55.016571624 +0000 UTC m=+928.902596932" observedRunningTime="2025-10-06 12:02:57.238628374 +0000 UTC m=+931.124653682" watchObservedRunningTime="2025-10-06 12:02:57.242127339 +0000 UTC m=+931.128152647" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.279560 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.179171977 podStartE2EDuration="1m3.279535709s" podCreationTimestamp="2025-10-06 12:01:54 +0000 UTC" firstStartedPulling="2025-10-06 12:02:08.942303122 +0000 UTC m=+882.828328430" lastFinishedPulling="2025-10-06 12:02:21.042666854 +0000 UTC m=+894.928692162" observedRunningTime="2025-10-06 12:02:57.261965548 +0000 UTC m=+931.147990866" watchObservedRunningTime="2025-10-06 12:02:57.279535709 +0000 UTC m=+931.165561027" Oct 06 12:02:57 
crc kubenswrapper[4958]: I1006 12:02:57.504794 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9ms5t"] Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.506502 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.509989 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.529793 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9ms5t"] Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.636497 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.636765 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.636793 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.636919 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-config\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.636951 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bkjs\" (UniqueName: \"kubernetes.io/projected/61ae269a-4ca3-435d-b709-c08d5aa97ba6-kube-api-access-8bkjs\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.636991 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.738718 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-config\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.738824 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bkjs\" (UniqueName: \"kubernetes.io/projected/61ae269a-4ca3-435d-b709-c08d5aa97ba6-kube-api-access-8bkjs\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.738908 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.739023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.739091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.739140 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.740684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.741778 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-config\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.743282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.744170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.744355 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.760085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bkjs\" (UniqueName: \"kubernetes.io/projected/61ae269a-4ca3-435d-b709-c08d5aa97ba6-kube-api-access-8bkjs\") pod \"dnsmasq-dns-77585f5f8c-9ms5t\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:57 crc kubenswrapper[4958]: I1006 12:02:57.856379 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.104727 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9ms5t"] Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.205033 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c931ada-9afe-4ec4-9f75-42db89dc36e8","Type":"ContainerStarted","Data":"bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684"} Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.206078 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.207636 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" event={"ID":"61ae269a-4ca3-435d-b709-c08d5aa97ba6","Type":"ContainerStarted","Data":"7dbbff66f8099e2138127dc307676b2b75dffc9b5e0619a24b4fb7f0f97fc8b7"} Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.229594 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.144841272 podStartE2EDuration="1m3.229577901s" podCreationTimestamp="2025-10-06 12:01:55 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.465610819 +0000 UTC m=+883.351636127" lastFinishedPulling="2025-10-06 12:02:21.550347408 +0000 UTC m=+895.436372756" observedRunningTime="2025-10-06 12:02:58.224698224 +0000 UTC m=+932.110723542" watchObservedRunningTime="2025-10-06 12:02:58.229577901 +0000 UTC m=+932.115603209" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.490253 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.553606 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run\") pod \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.553764 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-log-ovn\") pod \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.553835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-scripts\") pod \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.553883 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqknq\" (UniqueName: \"kubernetes.io/projected/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-kube-api-access-lqknq\") pod \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.554113 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run-ovn\") pod \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.554130 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-additional-scripts\") pod \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\" (UID: \"c02f903c-67d6-4ceb-8da4-fb3d04fcc119\") " Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.554326 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run" (OuterVolumeSpecName: "var-run") pod "c02f903c-67d6-4ceb-8da4-fb3d04fcc119" (UID: "c02f903c-67d6-4ceb-8da4-fb3d04fcc119"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.554393 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c02f903c-67d6-4ceb-8da4-fb3d04fcc119" (UID: "c02f903c-67d6-4ceb-8da4-fb3d04fcc119"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.554414 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c02f903c-67d6-4ceb-8da4-fb3d04fcc119" (UID: "c02f903c-67d6-4ceb-8da4-fb3d04fcc119"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.555265 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c02f903c-67d6-4ceb-8da4-fb3d04fcc119" (UID: "c02f903c-67d6-4ceb-8da4-fb3d04fcc119"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.555572 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-scripts" (OuterVolumeSpecName: "scripts") pod "c02f903c-67d6-4ceb-8da4-fb3d04fcc119" (UID: "c02f903c-67d6-4ceb-8da4-fb3d04fcc119"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.555982 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.556062 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.556119 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.556204 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.556271 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.562363 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-kube-api-access-lqknq" (OuterVolumeSpecName: "kube-api-access-lqknq") pod "c02f903c-67d6-4ceb-8da4-fb3d04fcc119" (UID: "c02f903c-67d6-4ceb-8da4-fb3d04fcc119"). InnerVolumeSpecName "kube-api-access-lqknq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:58 crc kubenswrapper[4958]: I1006 12:02:58.658070 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqknq\" (UniqueName: \"kubernetes.io/projected/c02f903c-67d6-4ceb-8da4-fb3d04fcc119-kube-api-access-lqknq\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.219499 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5x288-config-4srf8" Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.219520 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5x288-config-4srf8" event={"ID":"c02f903c-67d6-4ceb-8da4-fb3d04fcc119","Type":"ContainerDied","Data":"3657b12b8d9f021d61a32e6b3af2df19821326506657893a6da260e10c64a9ba"} Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.219938 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3657b12b8d9f021d61a32e6b3af2df19821326506657893a6da260e10c64a9ba" Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.225613 4958 generic.go:334] "Generic (PLEG): container finished" podID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerID="426f0da7ae574a456f0ecb849ecc21079f3371ad23eef1691e5be38d4ed0b9fd" exitCode=0 Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.225871 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" event={"ID":"61ae269a-4ca3-435d-b709-c08d5aa97ba6","Type":"ContainerDied","Data":"426f0da7ae574a456f0ecb849ecc21079f3371ad23eef1691e5be38d4ed0b9fd"} Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.341729 4958 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/keystone-9c95-account-create-drzjl"]
Oct 06 12:02:59 crc kubenswrapper[4958]: E1006 12:02:59.342078 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02f903c-67d6-4ceb-8da4-fb3d04fcc119" containerName="ovn-config"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.342092 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02f903c-67d6-4ceb-8da4-fb3d04fcc119" containerName="ovn-config"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.342291 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02f903c-67d6-4ceb-8da4-fb3d04fcc119" containerName="ovn-config"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.342756 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.360506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.373915 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9c95-account-create-drzjl"]
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.379687 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bthm\" (UniqueName: \"kubernetes.io/projected/b899478f-34ca-4002-837c-7e795f4dc77a-kube-api-access-7bthm\") pod \"keystone-9c95-account-create-drzjl\" (UID: \"b899478f-34ca-4002-837c-7e795f4dc77a\") " pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.480772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bthm\" (UniqueName: \"kubernetes.io/projected/b899478f-34ca-4002-837c-7e795f4dc77a-kube-api-access-7bthm\") pod \"keystone-9c95-account-create-drzjl\" (UID: \"b899478f-34ca-4002-837c-7e795f4dc77a\") " pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.503452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bthm\" (UniqueName: \"kubernetes.io/projected/b899478f-34ca-4002-837c-7e795f4dc77a-kube-api-access-7bthm\") pod \"keystone-9c95-account-create-drzjl\" (UID: \"b899478f-34ca-4002-837c-7e795f4dc77a\") " pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.597286 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5x288-config-4srf8"]
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.601593 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5x288-config-4srf8"]
Oct 06 12:02:59 crc kubenswrapper[4958]: I1006 12:02:59.707939 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.023201 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5x288"
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.168598 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9c95-account-create-drzjl"]
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.234810 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c95-account-create-drzjl" event={"ID":"b899478f-34ca-4002-837c-7e795f4dc77a","Type":"ContainerStarted","Data":"5d32bd8717cf16183f700b509afab30ab71d59c47baeeccbef65c7bdf8280e40"}
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.237364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" event={"ID":"61ae269a-4ca3-435d-b709-c08d5aa97ba6","Type":"ContainerStarted","Data":"92b6e2f44a3cab4fc9c7926d1ee7c98f725ae4b07dc4216c352874912933f915"}
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.237616 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t"
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.260009 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" podStartSLOduration=3.259992014 podStartE2EDuration="3.259992014s" podCreationTimestamp="2025-10-06 12:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:00.255282152 +0000 UTC m=+934.141307450" watchObservedRunningTime="2025-10-06 12:03:00.259992014 +0000 UTC m=+934.146017312"
Oct 06 12:03:00 crc kubenswrapper[4958]: I1006 12:03:00.926885 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02f903c-67d6-4ceb-8da4-fb3d04fcc119" path="/var/lib/kubelet/pods/c02f903c-67d6-4ceb-8da4-fb3d04fcc119/volumes"
Oct 06 12:03:01 crc kubenswrapper[4958]: I1006 12:03:01.249465 4958 generic.go:334] "Generic (PLEG): container finished" podID="b899478f-34ca-4002-837c-7e795f4dc77a" containerID="fb52276eccb8dd8524101c011d1b2c9d89ecb3e4454b9b7c163f658f2020be99" exitCode=0
Oct 06 12:03:01 crc kubenswrapper[4958]: I1006 12:03:01.249512 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c95-account-create-drzjl" event={"ID":"b899478f-34ca-4002-837c-7e795f4dc77a","Type":"ContainerDied","Data":"fb52276eccb8dd8524101c011d1b2c9d89ecb3e4454b9b7c163f658f2020be99"}
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.287320 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.654209 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4wj4v"]
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.656006 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.663139 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4wj4v"]
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.741591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zgm\" (UniqueName: \"kubernetes.io/projected/131f77e7-dc40-48c1-87c5-1a59b9ff226e-kube-api-access-54zgm\") pod \"cinder-db-create-4wj4v\" (UID: \"131f77e7-dc40-48c1-87c5-1a59b9ff226e\") " pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.761271 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vtjz2"]
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.762432 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.768223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vtjz2"]
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.842867 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xdq\" (UniqueName: \"kubernetes.io/projected/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977-kube-api-access-f5xdq\") pod \"barbican-db-create-vtjz2\" (UID: \"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977\") " pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.842943 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zgm\" (UniqueName: \"kubernetes.io/projected/131f77e7-dc40-48c1-87c5-1a59b9ff226e-kube-api-access-54zgm\") pod \"cinder-db-create-4wj4v\" (UID: \"131f77e7-dc40-48c1-87c5-1a59b9ff226e\") " pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.860983 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zgm\" (UniqueName: \"kubernetes.io/projected/131f77e7-dc40-48c1-87c5-1a59b9ff226e-kube-api-access-54zgm\") pod \"cinder-db-create-4wj4v\" (UID: \"131f77e7-dc40-48c1-87c5-1a59b9ff226e\") " pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.908168 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pvr9g"]
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.911972 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.953276 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5xdq\" (UniqueName: \"kubernetes.io/projected/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977-kube-api-access-f5xdq\") pod \"barbican-db-create-vtjz2\" (UID: \"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977\") " pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.975925 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pvr9g"]
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.983895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5xdq\" (UniqueName: \"kubernetes.io/projected/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977-kube-api-access-f5xdq\") pod \"barbican-db-create-vtjz2\" (UID: \"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977\") " pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:06 crc kubenswrapper[4958]: I1006 12:03:06.988743 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.055263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4h2\" (UniqueName: \"kubernetes.io/projected/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d-kube-api-access-bl4h2\") pod \"neutron-db-create-pvr9g\" (UID: \"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d\") " pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.076105 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.157808 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl4h2\" (UniqueName: \"kubernetes.io/projected/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d-kube-api-access-bl4h2\") pod \"neutron-db-create-pvr9g\" (UID: \"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d\") " pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.190365 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl4h2\" (UniqueName: \"kubernetes.io/projected/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d-kube-api-access-bl4h2\") pod \"neutron-db-create-pvr9g\" (UID: \"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d\") " pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.247848 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.857283 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t"
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.942367 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-sx8gl"]
Oct 06 12:03:07 crc kubenswrapper[4958]: I1006 12:03:07.942647 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-sx8gl" podUID="7884f459-0365-448a-b528-326be335a9e3" containerName="dnsmasq-dns" containerID="cri-o://e87e53ebe5f7e4a8f74f3c4d9dea6daa8eaec3c2c356781a77913a111e33335c" gracePeriod=10
Oct 06 12:03:09 crc kubenswrapper[4958]: I1006 12:03:09.341263 4958 generic.go:334] "Generic (PLEG): container finished" podID="7884f459-0365-448a-b528-326be335a9e3" containerID="e87e53ebe5f7e4a8f74f3c4d9dea6daa8eaec3c2c356781a77913a111e33335c" exitCode=0
Oct 06 12:03:09 crc kubenswrapper[4958]: I1006 12:03:09.341404 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-sx8gl" event={"ID":"7884f459-0365-448a-b528-326be335a9e3","Type":"ContainerDied","Data":"e87e53ebe5f7e4a8f74f3c4d9dea6daa8eaec3c2c356781a77913a111e33335c"}
Oct 06 12:03:11 crc kubenswrapper[4958]: E1006 12:03:11.232068 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Oct 06 12:03:11 crc kubenswrapper[4958]: E1006 12:03:11.232620 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcvc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-jxgv9_openstack(e3d53199-b7b6-4d78-9b4a-53ec81b1041d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 12:03:11 crc kubenswrapper[4958]: E1006 12:03:11.234085 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-jxgv9" podUID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d"
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.381696 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.381773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c95-account-create-drzjl" event={"ID":"b899478f-34ca-4002-837c-7e795f4dc77a","Type":"ContainerDied","Data":"5d32bd8717cf16183f700b509afab30ab71d59c47baeeccbef65c7bdf8280e40"}
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.381974 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d32bd8717cf16183f700b509afab30ab71d59c47baeeccbef65c7bdf8280e40"
Oct 06 12:03:11 crc kubenswrapper[4958]: E1006 12:03:11.383884 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-jxgv9" podUID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d"
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.436183 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bthm\" (UniqueName: \"kubernetes.io/projected/b899478f-34ca-4002-837c-7e795f4dc77a-kube-api-access-7bthm\") pod \"b899478f-34ca-4002-837c-7e795f4dc77a\" (UID: \"b899478f-34ca-4002-837c-7e795f4dc77a\") "
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.461671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b899478f-34ca-4002-837c-7e795f4dc77a-kube-api-access-7bthm" (OuterVolumeSpecName: "kube-api-access-7bthm") pod "b899478f-34ca-4002-837c-7e795f4dc77a" (UID: "b899478f-34ca-4002-837c-7e795f4dc77a"). InnerVolumeSpecName "kube-api-access-7bthm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.511274 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vtjz2"]
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.547008 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bthm\" (UniqueName: \"kubernetes.io/projected/b899478f-34ca-4002-837c-7e795f4dc77a-kube-api-access-7bthm\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.550438 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-sx8gl"
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.598045 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4wj4v"]
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.641128 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pvr9g"]
Oct 06 12:03:11 crc kubenswrapper[4958]: W1006 12:03:11.642801 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecceb49a_e566_4bbb_8f62_c4d0e59bd18d.slice/crio-f2679caf6597d868191fc3aa98c1c5997a79f893ef1d8fc368c5acf3b55741f8 WatchSource:0}: Error finding container f2679caf6597d868191fc3aa98c1c5997a79f893ef1d8fc368c5acf3b55741f8: Status 404 returned error can't find the container with id f2679caf6597d868191fc3aa98c1c5997a79f893ef1d8fc368c5acf3b55741f8
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.753508 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-config\") pod \"7884f459-0365-448a-b528-326be335a9e3\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") "
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.753573 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-sb\") pod \"7884f459-0365-448a-b528-326be335a9e3\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") "
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.753601 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-dns-svc\") pod \"7884f459-0365-448a-b528-326be335a9e3\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") "
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.753631 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fpm9\" (UniqueName: \"kubernetes.io/projected/7884f459-0365-448a-b528-326be335a9e3-kube-api-access-2fpm9\") pod \"7884f459-0365-448a-b528-326be335a9e3\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") "
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.753654 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-nb\") pod \"7884f459-0365-448a-b528-326be335a9e3\" (UID: \"7884f459-0365-448a-b528-326be335a9e3\") "
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.765393 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7884f459-0365-448a-b528-326be335a9e3-kube-api-access-2fpm9" (OuterVolumeSpecName: "kube-api-access-2fpm9") pod "7884f459-0365-448a-b528-326be335a9e3" (UID: "7884f459-0365-448a-b528-326be335a9e3"). InnerVolumeSpecName "kube-api-access-2fpm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.808485 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7884f459-0365-448a-b528-326be335a9e3" (UID: "7884f459-0365-448a-b528-326be335a9e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.810225 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7884f459-0365-448a-b528-326be335a9e3" (UID: "7884f459-0365-448a-b528-326be335a9e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.818738 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7884f459-0365-448a-b528-326be335a9e3" (UID: "7884f459-0365-448a-b528-326be335a9e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.826773 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-config" (OuterVolumeSpecName: "config") pod "7884f459-0365-448a-b528-326be335a9e3" (UID: "7884f459-0365-448a-b528-326be335a9e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.856444 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.856479 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.856492 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.856503 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fpm9\" (UniqueName: \"kubernetes.io/projected/7884f459-0365-448a-b528-326be335a9e3-kube-api-access-2fpm9\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:11 crc kubenswrapper[4958]: I1006 12:03:11.856515 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7884f459-0365-448a-b528-326be335a9e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.396532 4958 generic.go:334] "Generic (PLEG): container finished" podID="131f77e7-dc40-48c1-87c5-1a59b9ff226e" containerID="89026b4879c20b132c0de374e0bcb9900df5c58f7291f97f1d0d09e516d209ec" exitCode=0
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.396685 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wj4v" event={"ID":"131f77e7-dc40-48c1-87c5-1a59b9ff226e","Type":"ContainerDied","Data":"89026b4879c20b132c0de374e0bcb9900df5c58f7291f97f1d0d09e516d209ec"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.396764 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wj4v" event={"ID":"131f77e7-dc40-48c1-87c5-1a59b9ff226e","Type":"ContainerStarted","Data":"c5c01c94a06ad8ace5d6a2522477b9740a98f637548e612043ab5cea7db25580"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.400522 4958 generic.go:334] "Generic (PLEG): container finished" podID="5193f80f-d1c5-44bc-ad58-5c3b1a9f3977" containerID="13a53203f35bee5adaaa48ed11d86bee6ef684b94bdd7507a19bb89089a904b5" exitCode=0
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.400553 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vtjz2" event={"ID":"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977","Type":"ContainerDied","Data":"13a53203f35bee5adaaa48ed11d86bee6ef684b94bdd7507a19bb89089a904b5"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.400632 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vtjz2" event={"ID":"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977","Type":"ContainerStarted","Data":"d3d378bd4a923406917b9b14b99bd48571b496fe366056a8d16c68c98a759efb"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.404285 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-sx8gl" event={"ID":"7884f459-0365-448a-b528-326be335a9e3","Type":"ContainerDied","Data":"c93d8e36e3130a1f799e91e57777e01f227df15915ab22d44ebe514be489348b"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.404439 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-sx8gl"
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.404542 4958 scope.go:117] "RemoveContainer" containerID="e87e53ebe5f7e4a8f74f3c4d9dea6daa8eaec3c2c356781a77913a111e33335c"
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.410605 4958 generic.go:334] "Generic (PLEG): container finished" podID="ecceb49a-e566-4bbb-8f62-c4d0e59bd18d" containerID="7d28a66531b28268da699c6de6f051713fc683d75174a9fa0bc34d70271ee2fa" exitCode=0
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.410716 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c95-account-create-drzjl"
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.412291 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pvr9g" event={"ID":"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d","Type":"ContainerDied","Data":"7d28a66531b28268da699c6de6f051713fc683d75174a9fa0bc34d70271ee2fa"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.412336 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pvr9g" event={"ID":"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d","Type":"ContainerStarted","Data":"f2679caf6597d868191fc3aa98c1c5997a79f893ef1d8fc368c5acf3b55741f8"}
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.445564 4958 scope.go:117] "RemoveContainer" containerID="8ffba54cf142986fe765a77c2b8e0fc265dd3ce5f9e50697ee8594bf036a8e7e"
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.518408 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-sx8gl"]
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.526248 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-sx8gl"]
Oct 06 12:03:12 crc kubenswrapper[4958]: I1006 12:03:12.935218 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7884f459-0365-448a-b528-326be335a9e3" path="/var/lib/kubelet/pods/7884f459-0365-448a-b528-326be335a9e3/volumes"
Oct 06 12:03:13 crc kubenswrapper[4958]: I1006 12:03:13.840571 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:13 crc kubenswrapper[4958]: I1006 12:03:13.965595 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:13 crc kubenswrapper[4958]: I1006 12:03:13.972688 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.023451 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5xdq\" (UniqueName: \"kubernetes.io/projected/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977-kube-api-access-f5xdq\") pod \"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977\" (UID: \"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977\") "
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.023509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54zgm\" (UniqueName: \"kubernetes.io/projected/131f77e7-dc40-48c1-87c5-1a59b9ff226e-kube-api-access-54zgm\") pod \"131f77e7-dc40-48c1-87c5-1a59b9ff226e\" (UID: \"131f77e7-dc40-48c1-87c5-1a59b9ff226e\") "
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.030857 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131f77e7-dc40-48c1-87c5-1a59b9ff226e-kube-api-access-54zgm" (OuterVolumeSpecName: "kube-api-access-54zgm") pod "131f77e7-dc40-48c1-87c5-1a59b9ff226e" (UID: "131f77e7-dc40-48c1-87c5-1a59b9ff226e"). InnerVolumeSpecName "kube-api-access-54zgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.035663 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977-kube-api-access-f5xdq" (OuterVolumeSpecName: "kube-api-access-f5xdq") pod "5193f80f-d1c5-44bc-ad58-5c3b1a9f3977" (UID: "5193f80f-d1c5-44bc-ad58-5c3b1a9f3977"). InnerVolumeSpecName "kube-api-access-f5xdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.125866 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl4h2\" (UniqueName: \"kubernetes.io/projected/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d-kube-api-access-bl4h2\") pod \"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d\" (UID: \"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d\") "
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.126624 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5xdq\" (UniqueName: \"kubernetes.io/projected/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977-kube-api-access-f5xdq\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.126651 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54zgm\" (UniqueName: \"kubernetes.io/projected/131f77e7-dc40-48c1-87c5-1a59b9ff226e-kube-api-access-54zgm\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.129190 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d-kube-api-access-bl4h2" (OuterVolumeSpecName: "kube-api-access-bl4h2") pod "ecceb49a-e566-4bbb-8f62-c4d0e59bd18d" (UID: "ecceb49a-e566-4bbb-8f62-c4d0e59bd18d"). InnerVolumeSpecName "kube-api-access-bl4h2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.228672 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl4h2\" (UniqueName: \"kubernetes.io/projected/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d-kube-api-access-bl4h2\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.442699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pvr9g" event={"ID":"ecceb49a-e566-4bbb-8f62-c4d0e59bd18d","Type":"ContainerDied","Data":"f2679caf6597d868191fc3aa98c1c5997a79f893ef1d8fc368c5acf3b55741f8"}
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.442759 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2679caf6597d868191fc3aa98c1c5997a79f893ef1d8fc368c5acf3b55741f8"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.442731 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pvr9g"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.446129 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wj4v" event={"ID":"131f77e7-dc40-48c1-87c5-1a59b9ff226e","Type":"ContainerDied","Data":"c5c01c94a06ad8ace5d6a2522477b9740a98f637548e612043ab5cea7db25580"}
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.446188 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wj4v"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.446220 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c01c94a06ad8ace5d6a2522477b9740a98f637548e612043ab5cea7db25580"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.451204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vtjz2" event={"ID":"5193f80f-d1c5-44bc-ad58-5c3b1a9f3977","Type":"ContainerDied","Data":"d3d378bd4a923406917b9b14b99bd48571b496fe366056a8d16c68c98a759efb"}
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.451271 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d378bd4a923406917b9b14b99bd48571b496fe366056a8d16c68c98a759efb"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.451268 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vtjz2"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955482 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rs8jp"]
Oct 06 12:03:14 crc kubenswrapper[4958]: E1006 12:03:14.955800 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b899478f-34ca-4002-837c-7e795f4dc77a" containerName="mariadb-account-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955813 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b899478f-34ca-4002-837c-7e795f4dc77a" containerName="mariadb-account-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: E1006 12:03:14.955827 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884f459-0365-448a-b528-326be335a9e3" containerName="dnsmasq-dns"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955833 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884f459-0365-448a-b528-326be335a9e3" containerName="dnsmasq-dns"
Oct 06 12:03:14 crc kubenswrapper[4958]: E1006 12:03:14.955843 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecceb49a-e566-4bbb-8f62-c4d0e59bd18d" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955850 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecceb49a-e566-4bbb-8f62-c4d0e59bd18d" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: E1006 12:03:14.955870 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5193f80f-d1c5-44bc-ad58-5c3b1a9f3977" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955876 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5193f80f-d1c5-44bc-ad58-5c3b1a9f3977" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: E1006 12:03:14.955894 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131f77e7-dc40-48c1-87c5-1a59b9ff226e" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955900 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="131f77e7-dc40-48c1-87c5-1a59b9ff226e" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: E1006 12:03:14.955909 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884f459-0365-448a-b528-326be335a9e3" containerName="init"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.955915 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884f459-0365-448a-b528-326be335a9e3" containerName="init"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.956058 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b899478f-34ca-4002-837c-7e795f4dc77a" containerName="mariadb-account-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.956072 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecceb49a-e566-4bbb-8f62-c4d0e59bd18d" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.956083 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5193f80f-d1c5-44bc-ad58-5c3b1a9f3977" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.956090 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="131f77e7-dc40-48c1-87c5-1a59b9ff226e" containerName="mariadb-database-create"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.956102 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7884f459-0365-448a-b528-326be335a9e3" containerName="dnsmasq-dns"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.956637 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.959605 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.959906 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.960401 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.960968 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxvw8"
Oct 06 12:03:14 crc kubenswrapper[4958]: I1006 12:03:14.975291 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rs8jp"]
Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.142032 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7bk\" (UniqueName: \"kubernetes.io/projected/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-kube-api-access-nc7bk\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.142287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-combined-ca-bundle\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.142433 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-config-data\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.245025 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7bk\" (UniqueName: \"kubernetes.io/projected/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-kube-api-access-nc7bk\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.245676 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-combined-ca-bundle\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.245772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-config-data\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp"
Oct 06 12:03:15 crc 
kubenswrapper[4958]: I1006 12:03:15.255057 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-combined-ca-bundle\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp" Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.256923 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-config-data\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp" Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.267769 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7bk\" (UniqueName: \"kubernetes.io/projected/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-kube-api-access-nc7bk\") pod \"keystone-db-sync-rs8jp\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " pod="openstack/keystone-db-sync-rs8jp" Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.293039 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rs8jp" Oct 06 12:03:15 crc kubenswrapper[4958]: I1006 12:03:15.783865 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rs8jp"] Oct 06 12:03:16 crc kubenswrapper[4958]: I1006 12:03:16.478568 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs8jp" event={"ID":"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b","Type":"ContainerStarted","Data":"3ca63e427d4d541960e42f271f937d55ff9c8abcebe74b8130aabc2dbd56a42f"} Oct 06 12:03:16 crc kubenswrapper[4958]: I1006 12:03:16.610486 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:03:20 crc kubenswrapper[4958]: I1006 12:03:20.517312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs8jp" event={"ID":"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b","Type":"ContainerStarted","Data":"2dc678b984de79d7288a020c06b74c7886ffbef49af3e052f28644726affd6bf"} Oct 06 12:03:23 crc kubenswrapper[4958]: I1006 12:03:23.545601 4958 generic.go:334] "Generic (PLEG): container finished" podID="dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" containerID="2dc678b984de79d7288a020c06b74c7886ffbef49af3e052f28644726affd6bf" exitCode=0 Oct 06 12:03:23 crc kubenswrapper[4958]: I1006 12:03:23.545714 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs8jp" event={"ID":"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b","Type":"ContainerDied","Data":"2dc678b984de79d7288a020c06b74c7886ffbef49af3e052f28644726affd6bf"} Oct 06 12:03:24 crc kubenswrapper[4958]: I1006 12:03:24.918990 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rs8jp" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.022006 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-combined-ca-bundle\") pod \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.022122 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc7bk\" (UniqueName: \"kubernetes.io/projected/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-kube-api-access-nc7bk\") pod \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.022240 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-config-data\") pod \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\" (UID: \"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b\") " Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.029076 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-kube-api-access-nc7bk" (OuterVolumeSpecName: "kube-api-access-nc7bk") pod "dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" (UID: "dfe42ef2-678c-4c5d-af64-8cce7c3eb55b"). InnerVolumeSpecName "kube-api-access-nc7bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.068645 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" (UID: "dfe42ef2-678c-4c5d-af64-8cce7c3eb55b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.099844 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-config-data" (OuterVolumeSpecName: "config-data") pod "dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" (UID: "dfe42ef2-678c-4c5d-af64-8cce7c3eb55b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.126617 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.126651 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc7bk\" (UniqueName: \"kubernetes.io/projected/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-kube-api-access-nc7bk\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.126666 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.564601 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs8jp" event={"ID":"dfe42ef2-678c-4c5d-af64-8cce7c3eb55b","Type":"ContainerDied","Data":"3ca63e427d4d541960e42f271f937d55ff9c8abcebe74b8130aabc2dbd56a42f"} Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.564661 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca63e427d4d541960e42f271f937d55ff9c8abcebe74b8130aabc2dbd56a42f" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.564691 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rs8jp" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.865974 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lfmxp"] Oct 06 12:03:25 crc kubenswrapper[4958]: E1006 12:03:25.866509 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" containerName="keystone-db-sync" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.866532 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" containerName="keystone-db-sync" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.866731 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" containerName="keystone-db-sync" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.867441 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.871764 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pntks"] Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.874766 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.877334 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.877532 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxvw8" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.877565 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.878334 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.924818 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pntks"] Oct 06 12:03:25 crc kubenswrapper[4958]: I1006 12:03:25.962117 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lfmxp"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.009723 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78b66cc59c-dwbpp"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.011174 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.018693 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.019315 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-qq5pt" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.019618 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.019671 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043066 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-combined-ca-bundle\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-svc\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043205 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-config\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-credential-keys\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043246 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xrh\" (UniqueName: \"kubernetes.io/projected/d22edd9d-3613-41b4-bb0f-765b7ec199df-kube-api-access-d4xrh\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043266 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043331 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043347 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xw5k\" (UniqueName: \"kubernetes.io/projected/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-kube-api-access-5xw5k\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043369 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-config-data\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043386 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-fernet-keys\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.043446 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-scripts\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.081454 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78b66cc59c-dwbpp"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.103439 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.105183 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.110611 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.110910 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.125330 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.144985 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145019 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xw5k\" (UniqueName: \"kubernetes.io/projected/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-kube-api-access-5xw5k\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " 
pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145046 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-config-data\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145102 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllnl\" (UniqueName: \"kubernetes.io/projected/cedfa2ab-1916-46e6-8e95-18c7a6da6046-kube-api-access-hllnl\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145125 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-fernet-keys\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-scripts\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: 
I1006 12:03:26.145277 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedfa2ab-1916-46e6-8e95-18c7a6da6046-logs\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-combined-ca-bundle\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145315 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-svc\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-config\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-config-data\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-credential-keys\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145406 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xrh\" (UniqueName: \"kubernetes.io/projected/d22edd9d-3613-41b4-bb0f-765b7ec199df-kube-api-access-d4xrh\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145422 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145448 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cedfa2ab-1916-46e6-8e95-18c7a6da6046-horizon-secret-key\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.145468 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-scripts\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.146267 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.148224 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-svc\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.148758 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.151907 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.152572 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pntks"] Oct 06 12:03:26 crc kubenswrapper[4958]: E1006 12:03:26.153529 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config kube-api-access-5xw5k], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-55fff446b9-pntks" podUID="e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.157641 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-config\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.160477 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-config-data\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.171072 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-fernet-keys\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.171869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-combined-ca-bundle\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.179778 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xw5k\" (UniqueName: \"kubernetes.io/projected/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-kube-api-access-5xw5k\") pod \"dnsmasq-dns-55fff446b9-pntks\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.180531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-scripts\") pod \"keystone-bootstrap-lfmxp\" (UID: 
\"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.182637 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-credential-keys\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.223207 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f9bd9f6f-dzxmr"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.223969 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xrh\" (UniqueName: \"kubernetes.io/projected/d22edd9d-3613-41b4-bb0f-765b7ec199df-kube-api-access-d4xrh\") pod \"keystone-bootstrap-lfmxp\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.224645 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.239974 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9bd9f6f-dzxmr"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-config-data\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252346 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252379 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllnl\" (UniqueName: \"kubernetes.io/projected/cedfa2ab-1916-46e6-8e95-18c7a6da6046-kube-api-access-hllnl\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252421 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252443 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedfa2ab-1916-46e6-8e95-18c7a6da6046-logs\") pod 
\"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252465 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-scripts\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252482 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-run-httpd\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252497 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnktv\" (UniqueName: \"kubernetes.io/projected/7198a6cb-9e91-48c8-82b5-16f40fb6b732-kube-api-access-jnktv\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252519 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-log-httpd\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252538 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-config-data\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc 
kubenswrapper[4958]: I1006 12:03:26.252567 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cedfa2ab-1916-46e6-8e95-18c7a6da6046-horizon-secret-key\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.252587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-scripts\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.253346 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-scripts\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.253660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedfa2ab-1916-46e6-8e95-18c7a6da6046-logs\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.254487 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-config-data\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.259732 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/cedfa2ab-1916-46e6-8e95-18c7a6da6046-horizon-secret-key\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.263779 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-l8kbr"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.265540 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.281264 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s7xhl"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.283249 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.288766 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllnl\" (UniqueName: \"kubernetes.io/projected/cedfa2ab-1916-46e6-8e95-18c7a6da6046-kube-api-access-hllnl\") pod \"horizon-78b66cc59c-dwbpp\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.288809 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.288860 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.288964 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nhngr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.297408 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-l8kbr"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 
12:03:26.303998 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s7xhl"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.343540 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.353887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-config\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354055 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-horizon-secret-key\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354130 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-scripts\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354170 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-scripts\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354199 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-run-httpd\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354222 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnktv\" (UniqueName: \"kubernetes.io/projected/7198a6cb-9e91-48c8-82b5-16f40fb6b732-kube-api-access-jnktv\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354271 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-log-httpd\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " 
pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354323 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpkcs\" (UniqueName: \"kubernetes.io/projected/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-kube-api-access-cpkcs\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnl9\" (UniqueName: \"kubernetes.io/projected/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-kube-api-access-fvnl9\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354400 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354475 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354521 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-svc\") pod 
\"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-config-data\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354595 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-logs\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.354644 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-config-data\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.356306 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-run-httpd\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.357837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.358189 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-log-httpd\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.361335 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-scripts\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.362757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-config-data\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.372950 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.382714 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnktv\" (UniqueName: \"kubernetes.io/projected/7198a6cb-9e91-48c8-82b5-16f40fb6b732-kube-api-access-jnktv\") pod \"ceilometer-0\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.433160 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456208 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-horizon-secret-key\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456327 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-combined-ca-bundle\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-scripts\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456455 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmbq\" (UniqueName: \"kubernetes.io/projected/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-kube-api-access-jwmbq\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456481 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-scripts\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 
06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456558 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpkcs\" (UniqueName: \"kubernetes.io/projected/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-kube-api-access-cpkcs\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnl9\" (UniqueName: \"kubernetes.io/projected/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-kube-api-access-fvnl9\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc 
kubenswrapper[4958]: I1006 12:03:26.456668 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-config-data\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-logs\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456748 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-logs\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456780 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-config-data\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.456814 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-config\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.458171 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-config\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.458521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.459031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-scripts\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.459600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-horizon-secret-key\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.460126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: 
\"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.460588 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.461074 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-logs\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.461435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-config-data\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.461487 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.475340 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpkcs\" (UniqueName: \"kubernetes.io/projected/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-kube-api-access-cpkcs\") pod \"horizon-f9bd9f6f-dzxmr\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 
12:03:26.480940 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnl9\" (UniqueName: \"kubernetes.io/projected/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-kube-api-access-fvnl9\") pod \"dnsmasq-dns-76fcf4b695-l8kbr\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.491840 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.558428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-config-data\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.558490 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-logs\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.558552 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-combined-ca-bundle\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.558587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-scripts\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " 
pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.558604 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwmbq\" (UniqueName: \"kubernetes.io/projected/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-kube-api-access-jwmbq\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.560026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-logs\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.567097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-config-data\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.569521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-scripts\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.571046 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-combined-ca-bundle\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.577755 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.582100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwmbq\" (UniqueName: \"kubernetes.io/projected/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-kube-api-access-jwmbq\") pod \"placement-db-sync-s7xhl\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.609451 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.657047 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0515-account-create-txz72"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.658532 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.662137 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.662713 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.672960 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.672955 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0515-account-create-txz72"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.709710 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s7xhl" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.751240 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-408a-account-create-vwbr4"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.752210 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.754844 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.760685 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-config\") pod \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.760752 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-sb\") pod \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.760830 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-swift-storage-0\") pod \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.760892 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-svc\") pod \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " Oct 06 12:03:26 
crc kubenswrapper[4958]: I1006 12:03:26.760913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-nb\") pod \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.760973 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xw5k\" (UniqueName: \"kubernetes.io/projected/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-kube-api-access-5xw5k\") pod \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\" (UID: \"e3dc7d4f-8a9f-4c91-9940-a61e3bb26784\") " Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.761249 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pbn\" (UniqueName: \"kubernetes.io/projected/0a8e6024-5fe5-4759-ab73-414ebc3388f9-kube-api-access-l9pbn\") pod \"cinder-0515-account-create-txz72\" (UID: \"0a8e6024-5fe5-4759-ab73-414ebc3388f9\") " pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.762301 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-config" (OuterVolumeSpecName: "config") pod "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" (UID: "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.762606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" (UID: "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.762942 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" (UID: "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.763262 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" (UID: "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.763577 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" (UID: "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.768297 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-kube-api-access-5xw5k" (OuterVolumeSpecName: "kube-api-access-5xw5k") pod "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" (UID: "e3dc7d4f-8a9f-4c91-9940-a61e3bb26784"). InnerVolumeSpecName "kube-api-access-5xw5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.770418 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-408a-account-create-vwbr4"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.864480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pbn\" (UniqueName: \"kubernetes.io/projected/0a8e6024-5fe5-4759-ab73-414ebc3388f9-kube-api-access-l9pbn\") pod \"cinder-0515-account-create-txz72\" (UID: \"0a8e6024-5fe5-4759-ab73-414ebc3388f9\") " pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.864559 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmwk\" (UniqueName: \"kubernetes.io/projected/7bf2444f-b495-4fa7-822d-053a8d5c9b5d-kube-api-access-blmwk\") pod \"barbican-408a-account-create-vwbr4\" (UID: \"7bf2444f-b495-4fa7-822d-053a8d5c9b5d\") " pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.864677 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.865846 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.865857 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xw5k\" (UniqueName: \"kubernetes.io/projected/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-kube-api-access-5xw5k\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.865868 4958 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.865893 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.865902 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.868639 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78b66cc59c-dwbpp"] Oct 06 12:03:26 crc kubenswrapper[4958]: W1006 12:03:26.876315 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcedfa2ab_1916_46e6_8e95_18c7a6da6046.slice/crio-69bdb513c58fd97119854329b2d4b4b9dc23d3283c0b30c656c6f660c5d2d241 WatchSource:0}: Error finding container 69bdb513c58fd97119854329b2d4b4b9dc23d3283c0b30c656c6f660c5d2d241: Status 404 returned error can't find the container with id 69bdb513c58fd97119854329b2d4b4b9dc23d3283c0b30c656c6f660c5d2d241 Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.886593 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pbn\" (UniqueName: \"kubernetes.io/projected/0a8e6024-5fe5-4759-ab73-414ebc3388f9-kube-api-access-l9pbn\") pod \"cinder-0515-account-create-txz72\" (UID: \"0a8e6024-5fe5-4759-ab73-414ebc3388f9\") " pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.944379 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.966764 4958 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0757-account-create-7jfwp"] Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.969025 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.971975 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmwk\" (UniqueName: \"kubernetes.io/projected/7bf2444f-b495-4fa7-822d-053a8d5c9b5d-kube-api-access-blmwk\") pod \"barbican-408a-account-create-vwbr4\" (UID: \"7bf2444f-b495-4fa7-822d-053a8d5c9b5d\") " pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.974417 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 06 12:03:26 crc kubenswrapper[4958]: I1006 12:03:26.976843 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.000897 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmwk\" (UniqueName: \"kubernetes.io/projected/7bf2444f-b495-4fa7-822d-053a8d5c9b5d-kube-api-access-blmwk\") pod \"barbican-408a-account-create-vwbr4\" (UID: \"7bf2444f-b495-4fa7-822d-053a8d5c9b5d\") " pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:27 crc kubenswrapper[4958]: W1006 12:03:27.003459 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7198a6cb_9e91_48c8_82b5_16f40fb6b732.slice/crio-329a77dfcd7ff2498a851bc6159119753fc1f1b8af3580ee332432affb01290b WatchSource:0}: Error finding container 329a77dfcd7ff2498a851bc6159119753fc1f1b8af3580ee332432affb01290b: Status 404 returned error can't find the container with id 329a77dfcd7ff2498a851bc6159119753fc1f1b8af3580ee332432affb01290b 
Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.017714 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0757-account-create-7jfwp"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.071028 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lfmxp"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.073600 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk57m\" (UniqueName: \"kubernetes.io/projected/16cb82f4-081c-4085-b916-7ac6b4366c0a-kube-api-access-fk57m\") pod \"neutron-0757-account-create-7jfwp\" (UID: \"16cb82f4-081c-4085-b916-7ac6b4366c0a\") " pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.100405 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.176586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk57m\" (UniqueName: \"kubernetes.io/projected/16cb82f4-081c-4085-b916-7ac6b4366c0a-kube-api-access-fk57m\") pod \"neutron-0757-account-create-7jfwp\" (UID: \"16cb82f4-081c-4085-b916-7ac6b4366c0a\") " pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.209124 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk57m\" (UniqueName: \"kubernetes.io/projected/16cb82f4-081c-4085-b916-7ac6b4366c0a-kube-api-access-fk57m\") pod \"neutron-0757-account-create-7jfwp\" (UID: \"16cb82f4-081c-4085-b916-7ac6b4366c0a\") " pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.245082 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-l8kbr"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.310919 
4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.336663 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9bd9f6f-dzxmr"] Oct 06 12:03:27 crc kubenswrapper[4958]: W1006 12:03:27.355943 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76fb3845_b6c8_49ed_a7c5_fbf1254134dd.slice/crio-02d5da75838d46d26dbdaaa46f2c7fff052db9eb054857c9b03128366ec258ca WatchSource:0}: Error finding container 02d5da75838d46d26dbdaaa46f2c7fff052db9eb054857c9b03128366ec258ca: Status 404 returned error can't find the container with id 02d5da75838d46d26dbdaaa46f2c7fff052db9eb054857c9b03128366ec258ca Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.367240 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s7xhl"] Oct 06 12:03:27 crc kubenswrapper[4958]: W1006 12:03:27.422094 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2efb2d0_1b2f_47fd_a1ee_5e69cafa0fd7.slice/crio-103a1e5f07b752fe87fd1135d1837adcabc9ef6feed5d7964e4c7ec38a2932f0 WatchSource:0}: Error finding container 103a1e5f07b752fe87fd1135d1837adcabc9ef6feed5d7964e4c7ec38a2932f0: Status 404 returned error can't find the container with id 103a1e5f07b752fe87fd1135d1837adcabc9ef6feed5d7964e4c7ec38a2932f0 Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.621176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" event={"ID":"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50","Type":"ContainerStarted","Data":"c8b03a415d00a3932d743b37d0a12180281d9e8e5d1d5bef517482c86703efcc"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.624759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b66cc59c-dwbpp" 
event={"ID":"cedfa2ab-1916-46e6-8e95-18c7a6da6046","Type":"ContainerStarted","Data":"69bdb513c58fd97119854329b2d4b4b9dc23d3283c0b30c656c6f660c5d2d241"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.624799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9bd9f6f-dzxmr" event={"ID":"76fb3845-b6c8-49ed-a7c5-fbf1254134dd","Type":"ContainerStarted","Data":"02d5da75838d46d26dbdaaa46f2c7fff052db9eb054857c9b03128366ec258ca"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.625003 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerStarted","Data":"329a77dfcd7ff2498a851bc6159119753fc1f1b8af3580ee332432affb01290b"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.627044 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lfmxp" event={"ID":"d22edd9d-3613-41b4-bb0f-765b7ec199df","Type":"ContainerStarted","Data":"8d78d00d2c22dce09d3e13698fded654ebb19a162eb7bd6955ceee62046878be"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.627240 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lfmxp" event={"ID":"d22edd9d-3613-41b4-bb0f-765b7ec199df","Type":"ContainerStarted","Data":"d79b3d17645d0cb3ef2c5be11e53740518157b8ad192eb1fb6b6968de094daa1"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.631648 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s7xhl" event={"ID":"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7","Type":"ContainerStarted","Data":"103a1e5f07b752fe87fd1135d1837adcabc9ef6feed5d7964e4c7ec38a2932f0"} Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.631767 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-pntks" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.673212 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0515-account-create-txz72"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.680020 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lfmxp" podStartSLOduration=2.679994357 podStartE2EDuration="2.679994357s" podCreationTimestamp="2025-10-06 12:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:27.67843489 +0000 UTC m=+961.564460198" watchObservedRunningTime="2025-10-06 12:03:27.679994357 +0000 UTC m=+961.566019665" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.695696 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.730846 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pntks"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.737260 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-pntks"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.809820 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78b66cc59c-dwbpp"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.839135 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-408a-account-create-vwbr4"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.860522 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ccb77967f-gznd9"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.861892 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.892382 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ccb77967f-gznd9"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.925210 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.934848 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.965706 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0757-account-create-7jfwp"] Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.993558 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgz6\" (UniqueName: \"kubernetes.io/projected/13d1aca0-56df-432b-97e2-bdf76bda20b8-kube-api-access-nvgz6\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.993661 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-scripts\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.993690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13d1aca0-56df-432b-97e2-bdf76bda20b8-logs\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.993710 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13d1aca0-56df-432b-97e2-bdf76bda20b8-horizon-secret-key\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:27 crc kubenswrapper[4958]: I1006 12:03:27.993778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-config-data\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.094924 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-config-data\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.094992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvgz6\" (UniqueName: \"kubernetes.io/projected/13d1aca0-56df-432b-97e2-bdf76bda20b8-kube-api-access-nvgz6\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.095077 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-scripts\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.095099 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/13d1aca0-56df-432b-97e2-bdf76bda20b8-logs\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.095117 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13d1aca0-56df-432b-97e2-bdf76bda20b8-horizon-secret-key\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.095950 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-scripts\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.096104 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13d1aca0-56df-432b-97e2-bdf76bda20b8-logs\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.096446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-config-data\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.099639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13d1aca0-56df-432b-97e2-bdf76bda20b8-horizon-secret-key\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " 
pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.112692 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvgz6\" (UniqueName: \"kubernetes.io/projected/13d1aca0-56df-432b-97e2-bdf76bda20b8-kube-api-access-nvgz6\") pod \"horizon-ccb77967f-gznd9\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.259436 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.641032 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jxgv9" event={"ID":"e3d53199-b7b6-4d78-9b4a-53ec81b1041d","Type":"ContainerStarted","Data":"ba28d7229c7cd7e99f7c0055e526ec03a8f7ff0840038c6b1530766f70185199"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.646132 4958 generic.go:334] "Generic (PLEG): container finished" podID="16cb82f4-081c-4085-b916-7ac6b4366c0a" containerID="649ac2b31dec1f4a4111458193e96b6acbe4f8ffaf2b76c0e7b7bcebf89ba7b0" exitCode=0 Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.646223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0757-account-create-7jfwp" event={"ID":"16cb82f4-081c-4085-b916-7ac6b4366c0a","Type":"ContainerDied","Data":"649ac2b31dec1f4a4111458193e96b6acbe4f8ffaf2b76c0e7b7bcebf89ba7b0"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.646250 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0757-account-create-7jfwp" event={"ID":"16cb82f4-081c-4085-b916-7ac6b4366c0a","Type":"ContainerStarted","Data":"0a09d375411272db65c105d52fe0df4b0fd7b1f963d07236add842869bcfb221"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.648844 4958 generic.go:334] "Generic (PLEG): container finished" podID="7bf2444f-b495-4fa7-822d-053a8d5c9b5d" 
containerID="4d3a216076817b0747696b3a72343cd223743ed31aec65783864ca48a94b246a" exitCode=0 Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.648907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-408a-account-create-vwbr4" event={"ID":"7bf2444f-b495-4fa7-822d-053a8d5c9b5d","Type":"ContainerDied","Data":"4d3a216076817b0747696b3a72343cd223743ed31aec65783864ca48a94b246a"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.648925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-408a-account-create-vwbr4" event={"ID":"7bf2444f-b495-4fa7-822d-053a8d5c9b5d","Type":"ContainerStarted","Data":"000e2107d506c44ae2e22c5d219cce359f69631b61b0ac27852bf2ed2e7f68b3"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.650700 4958 generic.go:334] "Generic (PLEG): container finished" podID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerID="e2ee0a47c24873de1268babcae30e2cde51e3a99302edc26d6276d280520157b" exitCode=0 Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.650748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" event={"ID":"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50","Type":"ContainerDied","Data":"e2ee0a47c24873de1268babcae30e2cde51e3a99302edc26d6276d280520157b"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.661008 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jxgv9" podStartSLOduration=3.742281698 podStartE2EDuration="34.660942804s" podCreationTimestamp="2025-10-06 12:02:54 +0000 UTC" firstStartedPulling="2025-10-06 12:02:55.836909251 +0000 UTC m=+929.722934569" lastFinishedPulling="2025-10-06 12:03:26.755570367 +0000 UTC m=+960.641595675" observedRunningTime="2025-10-06 12:03:28.656921593 +0000 UTC m=+962.542946921" watchObservedRunningTime="2025-10-06 12:03:28.660942804 +0000 UTC m=+962.546968112" Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.662474 4958 generic.go:334] 
"Generic (PLEG): container finished" podID="0a8e6024-5fe5-4759-ab73-414ebc3388f9" containerID="aa4c6efd4b9366291e2ea0bbf13dc0b150b5630f61004579382ea0e57ad415b3" exitCode=0 Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.663253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0515-account-create-txz72" event={"ID":"0a8e6024-5fe5-4759-ab73-414ebc3388f9","Type":"ContainerDied","Data":"aa4c6efd4b9366291e2ea0bbf13dc0b150b5630f61004579382ea0e57ad415b3"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.663285 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0515-account-create-txz72" event={"ID":"0a8e6024-5fe5-4759-ab73-414ebc3388f9","Type":"ContainerStarted","Data":"6c60d24e7f6324e4426976bfc4bf82f0b023aa32ed5972d2664aac15f9c3e7b4"} Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.787080 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ccb77967f-gznd9"] Oct 06 12:03:28 crc kubenswrapper[4958]: W1006 12:03:28.810709 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d1aca0_56df_432b_97e2_bdf76bda20b8.slice/crio-29128926a0f563208b2ad7e30807da90ceeba3973e2a12128bbc0b0296b618f0 WatchSource:0}: Error finding container 29128926a0f563208b2ad7e30807da90ceeba3973e2a12128bbc0b0296b618f0: Status 404 returned error can't find the container with id 29128926a0f563208b2ad7e30807da90ceeba3973e2a12128bbc0b0296b618f0 Oct 06 12:03:28 crc kubenswrapper[4958]: I1006 12:03:28.928426 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3dc7d4f-8a9f-4c91-9940-a61e3bb26784" path="/var/lib/kubelet/pods/e3dc7d4f-8a9f-4c91-9940-a61e3bb26784/volumes" Oct 06 12:03:29 crc kubenswrapper[4958]: I1006 12:03:29.675530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ccb77967f-gznd9" 
event={"ID":"13d1aca0-56df-432b-97e2-bdf76bda20b8","Type":"ContainerStarted","Data":"29128926a0f563208b2ad7e30807da90ceeba3973e2a12128bbc0b0296b618f0"} Oct 06 12:03:29 crc kubenswrapper[4958]: I1006 12:03:29.680298 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" event={"ID":"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50","Type":"ContainerStarted","Data":"174de2d0c3aa07f89385198bd740a7bd7177cda9c12afae246e3efbe502acda1"} Oct 06 12:03:29 crc kubenswrapper[4958]: I1006 12:03:29.712296 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" podStartSLOduration=3.712276015 podStartE2EDuration="3.712276015s" podCreationTimestamp="2025-10-06 12:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:29.704852782 +0000 UTC m=+963.590878090" watchObservedRunningTime="2025-10-06 12:03:29.712276015 +0000 UTC m=+963.598301313" Oct 06 12:03:30 crc kubenswrapper[4958]: I1006 12:03:30.689826 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.514802 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.629977 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmwk\" (UniqueName: \"kubernetes.io/projected/7bf2444f-b495-4fa7-822d-053a8d5c9b5d-kube-api-access-blmwk\") pod \"7bf2444f-b495-4fa7-822d-053a8d5c9b5d\" (UID: \"7bf2444f-b495-4fa7-822d-053a8d5c9b5d\") " Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.638239 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf2444f-b495-4fa7-822d-053a8d5c9b5d-kube-api-access-blmwk" (OuterVolumeSpecName: "kube-api-access-blmwk") pod "7bf2444f-b495-4fa7-822d-053a8d5c9b5d" (UID: "7bf2444f-b495-4fa7-822d-053a8d5c9b5d"). InnerVolumeSpecName "kube-api-access-blmwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.714038 4958 generic.go:334] "Generic (PLEG): container finished" podID="d22edd9d-3613-41b4-bb0f-765b7ec199df" containerID="8d78d00d2c22dce09d3e13698fded654ebb19a162eb7bd6955ceee62046878be" exitCode=0 Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.714104 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lfmxp" event={"ID":"d22edd9d-3613-41b4-bb0f-765b7ec199df","Type":"ContainerDied","Data":"8d78d00d2c22dce09d3e13698fded654ebb19a162eb7bd6955ceee62046878be"} Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.717023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-408a-account-create-vwbr4" event={"ID":"7bf2444f-b495-4fa7-822d-053a8d5c9b5d","Type":"ContainerDied","Data":"000e2107d506c44ae2e22c5d219cce359f69631b61b0ac27852bf2ed2e7f68b3"} Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.717047 4958 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="000e2107d506c44ae2e22c5d219cce359f69631b61b0ac27852bf2ed2e7f68b3" Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.717070 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-408a-account-create-vwbr4" Oct 06 12:03:32 crc kubenswrapper[4958]: I1006 12:03:32.739561 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmwk\" (UniqueName: \"kubernetes.io/projected/7bf2444f-b495-4fa7-822d-053a8d5c9b5d-kube-api-access-blmwk\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.849521 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9bd9f6f-dzxmr"] Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.879852 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7567d7f44b-s7rvg"] Oct 06 12:03:34 crc kubenswrapper[4958]: E1006 12:03:34.880258 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf2444f-b495-4fa7-822d-053a8d5c9b5d" containerName="mariadb-account-create" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.880275 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf2444f-b495-4fa7-822d-053a8d5c9b5d" containerName="mariadb-account-create" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.880437 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf2444f-b495-4fa7-822d-053a8d5c9b5d" containerName="mariadb-account-create" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.881293 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.886920 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.899133 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7567d7f44b-s7rvg"] Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.950490 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ccb77967f-gznd9"] Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.986903 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69f5d58bb-ghq4l"] Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.988984 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:34 crc kubenswrapper[4958]: I1006 12:03:34.994351 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f5d58bb-ghq4l"] Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001002 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdq4c\" (UniqueName: \"kubernetes.io/projected/f0aa3dc0-4553-4ec3-bec4-097c68139910-kube-api-access-gdq4c\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001060 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-combined-ca-bundle\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001099 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0aa3dc0-4553-4ec3-bec4-097c68139910-logs\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-tls-certs\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001177 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-config-data\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-secret-key\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.001213 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-scripts\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.102612 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gdq4c\" (UniqueName: \"kubernetes.io/projected/f0aa3dc0-4553-4ec3-bec4-097c68139910-kube-api-access-gdq4c\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.102958 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-scripts\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103007 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-combined-ca-bundle\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-config-data\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103077 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-combined-ca-bundle\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f0aa3dc0-4553-4ec3-bec4-097c68139910-logs\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-tls-certs\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103133 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-logs\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103194 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2mn\" (UniqueName: \"kubernetes.io/projected/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-kube-api-access-9n2mn\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-config-data\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-secret-key\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-scripts\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-horizon-secret-key\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.103557 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-horizon-tls-certs\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.104075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0aa3dc0-4553-4ec3-bec4-097c68139910-logs\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.105749 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-scripts\") pod \"horizon-7567d7f44b-s7rvg\" 
(UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.105924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-config-data\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.110345 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-tls-certs\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.112749 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-combined-ca-bundle\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.112771 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-secret-key\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.122909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdq4c\" (UniqueName: \"kubernetes.io/projected/f0aa3dc0-4553-4ec3-bec4-097c68139910-kube-api-access-gdq4c\") pod \"horizon-7567d7f44b-s7rvg\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc 
kubenswrapper[4958]: I1006 12:03:35.205042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-horizon-secret-key\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.205159 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-horizon-tls-certs\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.205210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-scripts\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.205273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-config-data\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.205295 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-combined-ca-bundle\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.205321 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-logs\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.205349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2mn\" (UniqueName: \"kubernetes.io/projected/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-kube-api-access-9n2mn\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.206534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-scripts\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.206828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-logs\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.208104 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-config-data\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.209748 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-horizon-tls-certs\") pod \"horizon-69f5d58bb-ghq4l\" (UID: 
\"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.210721 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-combined-ca-bundle\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.212158 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.213121 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-horizon-secret-key\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.220950 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2mn\" (UniqueName: \"kubernetes.io/projected/7fdb6376-1709-4378-8fe4-eaf26cf5fde7-kube-api-access-9n2mn\") pod \"horizon-69f5d58bb-ghq4l\" (UID: \"7fdb6376-1709-4378-8fe4-eaf26cf5fde7\") " pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:35 crc kubenswrapper[4958]: I1006 12:03:35.317134 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:36 crc kubenswrapper[4958]: I1006 12:03:36.677362 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:36 crc kubenswrapper[4958]: I1006 12:03:36.729842 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9ms5t"] Oct 06 12:03:36 crc kubenswrapper[4958]: I1006 12:03:36.730066 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="dnsmasq-dns" containerID="cri-o://92b6e2f44a3cab4fc9c7926d1ee7c98f725ae4b07dc4216c352874912933f915" gracePeriod=10 Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.140432 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9kx8n"] Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.141654 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.145343 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2rn69" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.145700 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.154983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9kx8n"] Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.240505 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-db-sync-config-data\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.240550 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-combined-ca-bundle\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.240748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkkf\" (UniqueName: \"kubernetes.io/projected/2d86d752-3a4d-4940-b221-242b3253c418-kube-api-access-2lkkf\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.342041 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-db-sync-config-data\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.342074 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-combined-ca-bundle\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.342159 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lkkf\" (UniqueName: \"kubernetes.io/projected/2d86d752-3a4d-4940-b221-242b3253c418-kube-api-access-2lkkf\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.347993 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-db-sync-config-data\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.362629 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lkkf\" (UniqueName: \"kubernetes.io/projected/2d86d752-3a4d-4940-b221-242b3253c418-kube-api-access-2lkkf\") pod \"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.366243 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-combined-ca-bundle\") pod 
\"barbican-db-sync-9kx8n\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.458597 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.776841 4958 generic.go:334] "Generic (PLEG): container finished" podID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerID="92b6e2f44a3cab4fc9c7926d1ee7c98f725ae4b07dc4216c352874912933f915" exitCode=0 Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.776980 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" event={"ID":"61ae269a-4ca3-435d-b709-c08d5aa97ba6","Type":"ContainerDied","Data":"92b6e2f44a3cab4fc9c7926d1ee7c98f725ae4b07dc4216c352874912933f915"} Oct 06 12:03:37 crc kubenswrapper[4958]: I1006 12:03:37.857561 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.792457 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d" containerID="ba28d7229c7cd7e99f7c0055e526ec03a8f7ff0840038c6b1530766f70185199" exitCode=0 Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.792559 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jxgv9" event={"ID":"e3d53199-b7b6-4d78-9b4a-53ec81b1041d","Type":"ContainerDied","Data":"ba28d7229c7cd7e99f7c0055e526ec03a8f7ff0840038c6b1530766f70185199"} Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.798216 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0515-account-create-txz72" 
event={"ID":"0a8e6024-5fe5-4759-ab73-414ebc3388f9","Type":"ContainerDied","Data":"6c60d24e7f6324e4426976bfc4bf82f0b023aa32ed5972d2664aac15f9c3e7b4"} Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.798257 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c60d24e7f6324e4426976bfc4bf82f0b023aa32ed5972d2664aac15f9c3e7b4" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.800008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lfmxp" event={"ID":"d22edd9d-3613-41b4-bb0f-765b7ec199df","Type":"ContainerDied","Data":"d79b3d17645d0cb3ef2c5be11e53740518157b8ad192eb1fb6b6968de094daa1"} Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.800036 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79b3d17645d0cb3ef2c5be11e53740518157b8ad192eb1fb6b6968de094daa1" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.864714 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.881356 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.981682 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-scripts\") pod \"d22edd9d-3613-41b4-bb0f-765b7ec199df\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.981761 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-fernet-keys\") pod \"d22edd9d-3613-41b4-bb0f-765b7ec199df\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.981887 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4xrh\" (UniqueName: \"kubernetes.io/projected/d22edd9d-3613-41b4-bb0f-765b7ec199df-kube-api-access-d4xrh\") pod \"d22edd9d-3613-41b4-bb0f-765b7ec199df\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.981936 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-credential-keys\") pod \"d22edd9d-3613-41b4-bb0f-765b7ec199df\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.981970 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pbn\" (UniqueName: \"kubernetes.io/projected/0a8e6024-5fe5-4759-ab73-414ebc3388f9-kube-api-access-l9pbn\") pod \"0a8e6024-5fe5-4759-ab73-414ebc3388f9\" (UID: \"0a8e6024-5fe5-4759-ab73-414ebc3388f9\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.981991 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-combined-ca-bundle\") pod \"d22edd9d-3613-41b4-bb0f-765b7ec199df\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.982022 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-config-data\") pod \"d22edd9d-3613-41b4-bb0f-765b7ec199df\" (UID: \"d22edd9d-3613-41b4-bb0f-765b7ec199df\") " Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.989627 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8e6024-5fe5-4759-ab73-414ebc3388f9-kube-api-access-l9pbn" (OuterVolumeSpecName: "kube-api-access-l9pbn") pod "0a8e6024-5fe5-4759-ab73-414ebc3388f9" (UID: "0a8e6024-5fe5-4759-ab73-414ebc3388f9"). InnerVolumeSpecName "kube-api-access-l9pbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.990031 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22edd9d-3613-41b4-bb0f-765b7ec199df-kube-api-access-d4xrh" (OuterVolumeSpecName: "kube-api-access-d4xrh") pod "d22edd9d-3613-41b4-bb0f-765b7ec199df" (UID: "d22edd9d-3613-41b4-bb0f-765b7ec199df"). InnerVolumeSpecName "kube-api-access-d4xrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.992322 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d22edd9d-3613-41b4-bb0f-765b7ec199df" (UID: "d22edd9d-3613-41b4-bb0f-765b7ec199df"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.993616 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d22edd9d-3613-41b4-bb0f-765b7ec199df" (UID: "d22edd9d-3613-41b4-bb0f-765b7ec199df"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:38 crc kubenswrapper[4958]: I1006 12:03:38.994701 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-scripts" (OuterVolumeSpecName: "scripts") pod "d22edd9d-3613-41b4-bb0f-765b7ec199df" (UID: "d22edd9d-3613-41b4-bb0f-765b7ec199df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.014763 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d22edd9d-3613-41b4-bb0f-765b7ec199df" (UID: "d22edd9d-3613-41b4-bb0f-765b7ec199df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.020646 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-config-data" (OuterVolumeSpecName: "config-data") pod "d22edd9d-3613-41b4-bb0f-765b7ec199df" (UID: "d22edd9d-3613-41b4-bb0f-765b7ec199df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.083897 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.083934 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.083949 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4xrh\" (UniqueName: \"kubernetes.io/projected/d22edd9d-3613-41b4-bb0f-765b7ec199df-kube-api-access-d4xrh\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.083962 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.084005 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9pbn\" (UniqueName: \"kubernetes.io/projected/0a8e6024-5fe5-4759-ab73-414ebc3388f9-kube-api-access-l9pbn\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.084017 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.084028 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22edd9d-3613-41b4-bb0f-765b7ec199df-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.806819 4958 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lfmxp" Oct 06 12:03:39 crc kubenswrapper[4958]: I1006 12:03:39.806895 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0515-account-create-txz72" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.039163 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lfmxp"] Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.079456 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lfmxp"] Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.106288 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vqtmw"] Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.106672 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8e6024-5fe5-4759-ab73-414ebc3388f9" containerName="mariadb-account-create" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.106691 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8e6024-5fe5-4759-ab73-414ebc3388f9" containerName="mariadb-account-create" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.106708 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22edd9d-3613-41b4-bb0f-765b7ec199df" containerName="keystone-bootstrap" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.106715 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22edd9d-3613-41b4-bb0f-765b7ec199df" containerName="keystone-bootstrap" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.106920 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8e6024-5fe5-4759-ab73-414ebc3388f9" containerName="mariadb-account-create" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.106948 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22edd9d-3613-41b4-bb0f-765b7ec199df" 
containerName="keystone-bootstrap" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.107721 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.110476 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.110556 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.110626 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.111976 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxvw8" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.117865 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vqtmw"] Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.128581 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-fernet-keys\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.128638 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-config-data\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.128658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-credential-keys\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.128674 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87tt\" (UniqueName: \"kubernetes.io/projected/d7df091e-b3a3-441a-a831-d57964a84438-kube-api-access-p87tt\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.128697 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-combined-ca-bundle\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.128819 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-scripts\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.231175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-scripts\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.231698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-fernet-keys\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.231746 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-config-data\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.231775 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-credential-keys\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.231811 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87tt\" (UniqueName: \"kubernetes.io/projected/d7df091e-b3a3-441a-a831-d57964a84438-kube-api-access-p87tt\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.231855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-combined-ca-bundle\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.238252 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-combined-ca-bundle\") pod 
\"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.238443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-config-data\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.238534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-credential-keys\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.238533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-scripts\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.238769 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-fernet-keys\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.248560 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87tt\" (UniqueName: \"kubernetes.io/projected/d7df091e-b3a3-441a-a831-d57964a84438-kube-api-access-p87tt\") pod \"keystone-bootstrap-vqtmw\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc 
kubenswrapper[4958]: I1006 12:03:40.432552 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.786415 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.787292 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwmbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExp
r:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-s7xhl_openstack(e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.788648 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-s7xhl" podUID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.794236 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.794426 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfch67hbfh6ch6ch65h5bbh5c9hcch5dfh554h67fh65bhc7h547h56fh58dhd4h9ch656h67fh64dh7bh565h68dhc5h588h65fh689hf6h5cbh8bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-78b66cc59c-dwbpp_openstack(cedfa2ab-1916-46e6-8e95-18c7a6da6046): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 
12:03:40.817923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0757-account-create-7jfwp" event={"ID":"16cb82f4-081c-4085-b916-7ac6b4366c0a","Type":"ContainerDied","Data":"0a09d375411272db65c105d52fe0df4b0fd7b1f963d07236add842869bcfb221"} Oct 06 12:03:40 crc kubenswrapper[4958]: I1006 12:03:40.817970 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a09d375411272db65c105d52fe0df4b0fd7b1f963d07236add842869bcfb221" Oct 06 12:03:40 crc kubenswrapper[4958]: E1006 12:03:40.831217 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-s7xhl" podUID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:40.937908 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:40.941658 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:40.943444 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk57m\" (UniqueName: \"kubernetes.io/projected/16cb82f4-081c-4085-b916-7ac6b4366c0a-kube-api-access-fk57m\") pod \"16cb82f4-081c-4085-b916-7ac6b4366c0a\" (UID: \"16cb82f4-081c-4085-b916-7ac6b4366c0a\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:40.943908 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22edd9d-3613-41b4-bb0f-765b7ec199df" path="/var/lib/kubelet/pods/d22edd9d-3613-41b4-bb0f-765b7ec199df/volumes" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:40.957228 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cb82f4-081c-4085-b916-7ac6b4366c0a-kube-api-access-fk57m" (OuterVolumeSpecName: "kube-api-access-fk57m") pod "16cb82f4-081c-4085-b916-7ac6b4366c0a" (UID: "16cb82f4-081c-4085-b916-7ac6b4366c0a"). InnerVolumeSpecName "kube-api-access-fk57m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:40.983941 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jxgv9" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.045333 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-combined-ca-bundle\") pod \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.045718 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bkjs\" (UniqueName: \"kubernetes.io/projected/61ae269a-4ca3-435d-b709-c08d5aa97ba6-kube-api-access-8bkjs\") pod \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046207 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-config\") pod \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046264 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-sb\") pod \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046289 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-swift-storage-0\") pod \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046330 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-nb\") pod \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046374 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-config-data\") pod \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046418 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-svc\") pod \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\" (UID: \"61ae269a-4ca3-435d-b709-c08d5aa97ba6\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046468 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcvc5\" (UniqueName: \"kubernetes.io/projected/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-kube-api-access-hcvc5\") pod \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.046494 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-db-sync-config-data\") pod \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\" (UID: \"e3d53199-b7b6-4d78-9b4a-53ec81b1041d\") " Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.047071 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk57m\" (UniqueName: \"kubernetes.io/projected/16cb82f4-081c-4085-b916-7ac6b4366c0a-kube-api-access-fk57m\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.060677 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e3d53199-b7b6-4d78-9b4a-53ec81b1041d" (UID: "e3d53199-b7b6-4d78-9b4a-53ec81b1041d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.061534 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ae269a-4ca3-435d-b709-c08d5aa97ba6-kube-api-access-8bkjs" (OuterVolumeSpecName: "kube-api-access-8bkjs") pod "61ae269a-4ca3-435d-b709-c08d5aa97ba6" (UID: "61ae269a-4ca3-435d-b709-c08d5aa97ba6"). InnerVolumeSpecName "kube-api-access-8bkjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.064832 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-kube-api-access-hcvc5" (OuterVolumeSpecName: "kube-api-access-hcvc5") pod "e3d53199-b7b6-4d78-9b4a-53ec81b1041d" (UID: "e3d53199-b7b6-4d78-9b4a-53ec81b1041d"). InnerVolumeSpecName "kube-api-access-hcvc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.149280 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcvc5\" (UniqueName: \"kubernetes.io/projected/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-kube-api-access-hcvc5\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.149309 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.149325 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bkjs\" (UniqueName: \"kubernetes.io/projected/61ae269a-4ca3-435d-b709-c08d5aa97ba6-kube-api-access-8bkjs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.236271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3d53199-b7b6-4d78-9b4a-53ec81b1041d" (UID: "e3d53199-b7b6-4d78-9b4a-53ec81b1041d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.252510 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: E1006 12:03:41.252915 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-78b66cc59c-dwbpp" podUID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.305592 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61ae269a-4ca3-435d-b709-c08d5aa97ba6" (UID: "61ae269a-4ca3-435d-b709-c08d5aa97ba6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.335247 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-config" (OuterVolumeSpecName: "config") pod "61ae269a-4ca3-435d-b709-c08d5aa97ba6" (UID: "61ae269a-4ca3-435d-b709-c08d5aa97ba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.340213 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61ae269a-4ca3-435d-b709-c08d5aa97ba6" (UID: "61ae269a-4ca3-435d-b709-c08d5aa97ba6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.340377 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61ae269a-4ca3-435d-b709-c08d5aa97ba6" (UID: "61ae269a-4ca3-435d-b709-c08d5aa97ba6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.340960 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "61ae269a-4ca3-435d-b709-c08d5aa97ba6" (UID: "61ae269a-4ca3-435d-b709-c08d5aa97ba6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.353268 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-config-data" (OuterVolumeSpecName: "config-data") pod "e3d53199-b7b6-4d78-9b4a-53ec81b1041d" (UID: "e3d53199-b7b6-4d78-9b4a-53ec81b1041d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.354205 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.354228 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d53199-b7b6-4d78-9b4a-53ec81b1041d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.354237 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.354245 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.354253 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.354265 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61ae269a-4ca3-435d-b709-c08d5aa97ba6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.838382 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b66cc59c-dwbpp" event={"ID":"cedfa2ab-1916-46e6-8e95-18c7a6da6046","Type":"ContainerStarted","Data":"4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 
12:03:41.838539 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78b66cc59c-dwbpp" podUID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" containerName="horizon" containerID="cri-o://4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5" gracePeriod=30 Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.841608 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9bd9f6f-dzxmr" event={"ID":"76fb3845-b6c8-49ed-a7c5-fbf1254134dd","Type":"ContainerStarted","Data":"d00543839437d8ddab081b5d4fcdfeb78edeeaa73c91085cab177c7f11462364"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.841637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9bd9f6f-dzxmr" event={"ID":"76fb3845-b6c8-49ed-a7c5-fbf1254134dd","Type":"ContainerStarted","Data":"2248175c24a706093c575dc76c190f64771a8e5f014536cd647d96ba8aaf3967"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.841912 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f9bd9f6f-dzxmr" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon" containerID="cri-o://d00543839437d8ddab081b5d4fcdfeb78edeeaa73c91085cab177c7f11462364" gracePeriod=30 Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.841728 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f9bd9f6f-dzxmr" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon-log" containerID="cri-o://2248175c24a706093c575dc76c190f64771a8e5f014536cd647d96ba8aaf3967" gracePeriod=30 Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.846043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerStarted","Data":"d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.849196 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" event={"ID":"61ae269a-4ca3-435d-b709-c08d5aa97ba6","Type":"ContainerDied","Data":"7dbbff66f8099e2138127dc307676b2b75dffc9b5e0619a24b4fb7f0f97fc8b7"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.849239 4958 scope.go:117] "RemoveContainer" containerID="92b6e2f44a3cab4fc9c7926d1ee7c98f725ae4b07dc4216c352874912933f915" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.850190 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9ms5t" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.867105 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jxgv9" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.869105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jxgv9" event={"ID":"e3d53199-b7b6-4d78-9b4a-53ec81b1041d","Type":"ContainerDied","Data":"a9bb83df40b5230af12a72137e79620a7106ebf30b379fdad3300019164c863f"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.869158 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9bb83df40b5230af12a72137e79620a7106ebf30b379fdad3300019164c863f" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.885060 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0757-account-create-7jfwp" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.890333 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ccb77967f-gznd9" event={"ID":"13d1aca0-56df-432b-97e2-bdf76bda20b8","Type":"ContainerStarted","Data":"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.890387 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ccb77967f-gznd9" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon-log" containerID="cri-o://502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b" gracePeriod=30 Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.890499 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ccb77967f-gznd9" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon" containerID="cri-o://973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b" gracePeriod=30 Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.890396 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ccb77967f-gznd9" event={"ID":"13d1aca0-56df-432b-97e2-bdf76bda20b8","Type":"ContainerStarted","Data":"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b"} Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.913117 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f9bd9f6f-dzxmr" podStartSLOduration=2.448604463 podStartE2EDuration="15.913098283s" podCreationTimestamp="2025-10-06 12:03:26 +0000 UTC" firstStartedPulling="2025-10-06 12:03:27.379132733 +0000 UTC m=+961.265158031" lastFinishedPulling="2025-10-06 12:03:40.843626543 +0000 UTC m=+974.729651851" observedRunningTime="2025-10-06 12:03:41.890582263 +0000 UTC m=+975.776607571" watchObservedRunningTime="2025-10-06 12:03:41.913098283 +0000 UTC 
m=+975.799123591" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.914924 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-ccb77967f-gznd9" podStartSLOduration=2.801653895 podStartE2EDuration="14.914916748s" podCreationTimestamp="2025-10-06 12:03:27 +0000 UTC" firstStartedPulling="2025-10-06 12:03:28.814746498 +0000 UTC m=+962.700771806" lastFinishedPulling="2025-10-06 12:03:40.928009351 +0000 UTC m=+974.814034659" observedRunningTime="2025-10-06 12:03:41.911640519 +0000 UTC m=+975.797665817" watchObservedRunningTime="2025-10-06 12:03:41.914916748 +0000 UTC m=+975.800942056" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.921777 4958 scope.go:117] "RemoveContainer" containerID="426f0da7ae574a456f0ecb849ecc21079f3371ad23eef1691e5be38d4ed0b9fd" Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.948893 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9ms5t"] Oct 06 12:03:41 crc kubenswrapper[4958]: I1006 12:03:41.963877 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9ms5t"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.050991 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wqlhj"] Oct 06 12:03:42 crc kubenswrapper[4958]: E1006 12:03:42.051399 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cb82f4-081c-4085-b916-7ac6b4366c0a" containerName="mariadb-account-create" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051424 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cb82f4-081c-4085-b916-7ac6b4366c0a" containerName="mariadb-account-create" Oct 06 12:03:42 crc kubenswrapper[4958]: E1006 12:03:42.051442 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d" containerName="glance-db-sync" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051451 4958 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d" containerName="glance-db-sync" Oct 06 12:03:42 crc kubenswrapper[4958]: E1006 12:03:42.051464 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="init" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051471 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="init" Oct 06 12:03:42 crc kubenswrapper[4958]: E1006 12:03:42.051508 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="dnsmasq-dns" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051517 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="dnsmasq-dns" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051702 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cb82f4-081c-4085-b916-7ac6b4366c0a" containerName="mariadb-account-create" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051716 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d" containerName="glance-db-sync" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.051731 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" containerName="dnsmasq-dns" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.057547 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.059893 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.060221 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x7gpf" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.066715 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wqlhj"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.068725 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.073495 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f5d58bb-ghq4l"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.088357 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7567d7f44b-s7rvg"] Oct 06 12:03:42 crc kubenswrapper[4958]: W1006 12:03:42.102647 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0aa3dc0_4553_4ec3_bec4_097c68139910.slice/crio-90d64ddf7e28f4fd3dba605b0e62c148245f6b667816016e63506f623d0db803 WatchSource:0}: Error finding container 90d64ddf7e28f4fd3dba605b0e62c148245f6b667816016e63506f623d0db803: Status 404 returned error can't find the container with id 90d64ddf7e28f4fd3dba605b0e62c148245f6b667816016e63506f623d0db803 Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.104775 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9kx8n"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.129435 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vqtmw"] Oct 06 12:03:42 crc kubenswrapper[4958]: W1006 12:03:42.140197 4958 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7df091e_b3a3_441a_a831_d57964a84438.slice/crio-d31ab66307e7b22a4dce67f5ce337de2239c6599e7f2dfbcb83b18aa87ae54e7 WatchSource:0}: Error finding container d31ab66307e7b22a4dce67f5ce337de2239c6599e7f2dfbcb83b18aa87ae54e7: Status 404 returned error can't find the container with id d31ab66307e7b22a4dce67f5ce337de2239c6599e7f2dfbcb83b18aa87ae54e7 Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.182248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-config-data\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.182358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjl7\" (UniqueName: \"kubernetes.io/projected/9e240eda-9921-45e1-991d-971031189ee4-kube-api-access-xjjl7\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.182390 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e240eda-9921-45e1-991d-971031189ee4-etc-machine-id\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.182408 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-db-sync-config-data\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " 
pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.182438 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-combined-ca-bundle\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.182465 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-scripts\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.237501 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dv44v"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.241164 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.244516 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pxfvn" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.244759 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.244880 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.263864 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dv44v"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287241 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-config\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjl7\" (UniqueName: \"kubernetes.io/projected/9e240eda-9921-45e1-991d-971031189ee4-kube-api-access-xjjl7\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287345 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e240eda-9921-45e1-991d-971031189ee4-etc-machine-id\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-db-sync-config-data\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-combined-ca-bundle\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-scripts\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ffvd\" (UniqueName: \"kubernetes.io/projected/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-kube-api-access-5ffvd\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287449 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-combined-ca-bundle\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.287514 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-config-data\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.296597 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-config-data\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.298066 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-combined-ca-bundle\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.298371 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e240eda-9921-45e1-991d-971031189ee4-etc-machine-id\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.302984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-scripts\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.311730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-db-sync-config-data\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " 
pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.348874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjl7\" (UniqueName: \"kubernetes.io/projected/9e240eda-9921-45e1-991d-971031189ee4-kube-api-access-xjjl7\") pod \"cinder-db-sync-wqlhj\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.394136 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-config\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.394976 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ffvd\" (UniqueName: \"kubernetes.io/projected/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-kube-api-access-5ffvd\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.395054 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-combined-ca-bundle\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.402785 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-combined-ca-bundle\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.407444 4958 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.408324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-config\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.497821 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ffvd\" (UniqueName: \"kubernetes.io/projected/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-kube-api-access-5ffvd\") pod \"neutron-db-sync-dv44v\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.502189 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s74ql"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.504166 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.555222 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dv44v" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.583185 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s74ql"] Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.598069 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.598217 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxhc\" (UniqueName: \"kubernetes.io/projected/09020dee-07c8-4b21-95e6-701ea33f70f4-kube-api-access-8bxhc\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.598308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.598445 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.598771 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.598859 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-config\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.700108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.700374 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxhc\" (UniqueName: \"kubernetes.io/projected/09020dee-07c8-4b21-95e6-701ea33f70f4-kube-api-access-8bxhc\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.700398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.700455 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.700524 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.700553 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-config\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.701499 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.701687 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.701703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-config\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.702247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.704831 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.749044 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxhc\" (UniqueName: \"kubernetes.io/projected/09020dee-07c8-4b21-95e6-701ea33f70f4-kube-api-access-8bxhc\") pod \"dnsmasq-dns-8b5c85b87-s74ql\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.882917 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.902568 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7567d7f44b-s7rvg" event={"ID":"f0aa3dc0-4553-4ec3-bec4-097c68139910","Type":"ContainerStarted","Data":"1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808"} Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.902615 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7567d7f44b-s7rvg" event={"ID":"f0aa3dc0-4553-4ec3-bec4-097c68139910","Type":"ContainerStarted","Data":"90d64ddf7e28f4fd3dba605b0e62c148245f6b667816016e63506f623d0db803"} Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.905531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5d58bb-ghq4l" event={"ID":"7fdb6376-1709-4378-8fe4-eaf26cf5fde7","Type":"ContainerStarted","Data":"fd7ba69c7cd8f81ca2f135dd22e65ec318129515caac750db5af6600045f0287"} Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.905597 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5d58bb-ghq4l" event={"ID":"7fdb6376-1709-4378-8fe4-eaf26cf5fde7","Type":"ContainerStarted","Data":"392e6e745d6d3232f3f75f91605cea2422196131a04bbdf6a9f3cf565ca9af43"} Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.910948 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqtmw" event={"ID":"d7df091e-b3a3-441a-a831-d57964a84438","Type":"ContainerStarted","Data":"d31ab66307e7b22a4dce67f5ce337de2239c6599e7f2dfbcb83b18aa87ae54e7"} Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.935558 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vqtmw" podStartSLOduration=2.93544948 podStartE2EDuration="2.93544948s" podCreationTimestamp="2025-10-06 12:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:42.931958685 +0000 UTC m=+976.817983993" watchObservedRunningTime="2025-10-06 12:03:42.93544948 +0000 UTC m=+976.821474778" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.937826 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ae269a-4ca3-435d-b709-c08d5aa97ba6" path="/var/lib/kubelet/pods/61ae269a-4ca3-435d-b709-c08d5aa97ba6/volumes" Oct 06 12:03:42 crc kubenswrapper[4958]: I1006 12:03:42.939784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9kx8n" event={"ID":"2d86d752-3a4d-4940-b221-242b3253c418","Type":"ContainerStarted","Data":"d9bc707430764af4c1fa07d57ccec0af93a589301e0aeaf130ad7bf19bf380f2"} Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.162980 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wqlhj"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.347823 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dv44v"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.391634 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.393033 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.398011 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r7q2w" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.398282 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.398487 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.406701 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.491881 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s74ql"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522210 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522319 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmtv\" (UniqueName: \"kubernetes.io/projected/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-kube-api-access-zrmtv\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522413 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522445 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-logs\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.522568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.606389 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.608032 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.612952 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627353 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-logs\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627456 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627523 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrmtv\" (UniqueName: \"kubernetes.io/projected/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-kube-api-access-zrmtv\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.627545 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.632674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-logs\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.632946 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.633592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.642098 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.643857 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.647586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.651400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.657953 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrmtv\" (UniqueName: \"kubernetes.io/projected/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-kube-api-access-zrmtv\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.687524 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.728984 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7tbp\" (UniqueName: \"kubernetes.io/projected/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-kube-api-access-p7tbp\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.729259 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.729291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.729306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.729351 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.729367 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.729423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.772228 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: W1006 12:03:43.788449 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09020dee_07c8_4b21_95e6_701ea33f70f4.slice/crio-dc4c5500e817f2f2aede49a19d1028adddd4cb4d0229cb4dde90ebfafea2ea05 WatchSource:0}: Error finding container dc4c5500e817f2f2aede49a19d1028adddd4cb4d0229cb4dde90ebfafea2ea05: Status 404 returned error can't find the container with id dc4c5500e817f2f2aede49a19d1028adddd4cb4d0229cb4dde90ebfafea2ea05 Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830718 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7tbp\" (UniqueName: \"kubernetes.io/projected/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-kube-api-access-p7tbp\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830754 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830779 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830796 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830840 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.830856 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.831085 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.834128 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-httpd-run\") 
pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.834695 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.835784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.838535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.841684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.867327 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7tbp\" (UniqueName: \"kubernetes.io/projected/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-kube-api-access-p7tbp\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " 
pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.888928 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.948810 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5d58bb-ghq4l" event={"ID":"7fdb6376-1709-4378-8fe4-eaf26cf5fde7","Type":"ContainerStarted","Data":"17014812bc47ac203c200d80071006c040a84acef3f5bc065f014f13427b3df4"} Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.953460 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dv44v" event={"ID":"d0818578-1e79-4dfb-8257-b8b1a2bc0cef","Type":"ContainerStarted","Data":"b280abe1d757d880b67606af47c6b2fe244ce0cc13ae10da93ba2cfb9268b641"} Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.976638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqtmw" event={"ID":"d7df091e-b3a3-441a-a831-d57964a84438","Type":"ContainerStarted","Data":"caa5ca7ecebae672272f61cf77f436f3c18a6481059473294df40d3abfa17a2d"} Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.977072 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69f5d58bb-ghq4l" podStartSLOduration=9.977061139 podStartE2EDuration="9.977061139s" podCreationTimestamp="2025-10-06 12:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:43.971187381 +0000 UTC m=+977.857212709" watchObservedRunningTime="2025-10-06 12:03:43.977061139 +0000 UTC m=+977.863086447" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.985045 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-db-sync-wqlhj" event={"ID":"9e240eda-9921-45e1-991d-971031189ee4","Type":"ContainerStarted","Data":"51eaa048d83d3adeab10bb404c85867b8cc7e1a55df9a4b91af37d4bb000b800"} Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.990086 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7567d7f44b-s7rvg" event={"ID":"f0aa3dc0-4553-4ec3-bec4-097c68139910","Type":"ContainerStarted","Data":"2bf82343c7eb8ce707462fc4316e29f13036b7e1df47ae7e9cf97e8645c62246"} Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.993501 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:43 crc kubenswrapper[4958]: I1006 12:03:43.999347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" event={"ID":"09020dee-07c8-4b21-95e6-701ea33f70f4","Type":"ContainerStarted","Data":"dc4c5500e817f2f2aede49a19d1028adddd4cb4d0229cb4dde90ebfafea2ea05"} Oct 06 12:03:44 crc kubenswrapper[4958]: I1006 12:03:44.020389 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7567d7f44b-s7rvg" podStartSLOduration=10.020367496 podStartE2EDuration="10.020367496s" podCreationTimestamp="2025-10-06 12:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:44.020315735 +0000 UTC m=+977.906341053" watchObservedRunningTime="2025-10-06 12:03:44.020367496 +0000 UTC m=+977.906392804" Oct 06 12:03:44 crc kubenswrapper[4958]: I1006 12:03:44.474628 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:45 crc kubenswrapper[4958]: I1006 12:03:45.020275 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dv44v" 
event={"ID":"d0818578-1e79-4dfb-8257-b8b1a2bc0cef","Type":"ContainerStarted","Data":"be6cfd84d054a5d7f5ab91a026a82bbc0ded37e649b2de2a77f0d720bc80b0cf"} Oct 06 12:03:45 crc kubenswrapper[4958]: I1006 12:03:45.048257 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dv44v" podStartSLOduration=3.048233809 podStartE2EDuration="3.048233809s" podCreationTimestamp="2025-10-06 12:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:45.042622649 +0000 UTC m=+978.928647977" watchObservedRunningTime="2025-10-06 12:03:45.048233809 +0000 UTC m=+978.934259117" Oct 06 12:03:45 crc kubenswrapper[4958]: I1006 12:03:45.213335 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:45 crc kubenswrapper[4958]: I1006 12:03:45.213418 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:03:45 crc kubenswrapper[4958]: I1006 12:03:45.345306 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:45 crc kubenswrapper[4958]: I1006 12:03:45.345363 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:03:45 crc kubenswrapper[4958]: W1006 12:03:45.375364 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a220bc7_3077_4d2f_aad4_9d1b52c752a1.slice/crio-49ceed096fec91691e79e713b13aa963793e146f9e98e4cb36d00b9bec5ec9d9 WatchSource:0}: Error finding container 49ceed096fec91691e79e713b13aa963793e146f9e98e4cb36d00b9bec5ec9d9: Status 404 returned error can't find the container with id 49ceed096fec91691e79e713b13aa963793e146f9e98e4cb36d00b9bec5ec9d9 Oct 06 12:03:46 crc kubenswrapper[4958]: 
I1006 12:03:46.056649 4958 generic.go:334] "Generic (PLEG): container finished" podID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerID="79261e888ea072b546b90d006fc9f88ca71e9b2b04bc81f03cd04e3abc3d9bf0" exitCode=0 Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.057031 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" event={"ID":"09020dee-07c8-4b21-95e6-701ea33f70f4","Type":"ContainerDied","Data":"79261e888ea072b546b90d006fc9f88ca71e9b2b04bc81f03cd04e3abc3d9bf0"} Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.069200 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a220bc7-3077-4d2f-aad4-9d1b52c752a1","Type":"ContainerStarted","Data":"49ceed096fec91691e79e713b13aa963793e146f9e98e4cb36d00b9bec5ec9d9"} Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.108610 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.216182 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.280240 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.351728 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:03:46 crc kubenswrapper[4958]: I1006 12:03:46.667840 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:03:47 crc kubenswrapper[4958]: I1006 12:03:47.084171 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerStarted","Data":"fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37"} Oct 06 12:03:47 crc 
kubenswrapper[4958]: I1006 12:03:47.087280 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b","Type":"ContainerStarted","Data":"526b744ee4ba4bb6750ced97d55caced455d74d4b596c4dc153eabd3f6d228dc"} Oct 06 12:03:47 crc kubenswrapper[4958]: I1006 12:03:47.087354 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b","Type":"ContainerStarted","Data":"211685982b424017832f7e049eb759becf79388279422a6a6c4d3298fb154152"} Oct 06 12:03:47 crc kubenswrapper[4958]: I1006 12:03:47.088769 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a220bc7-3077-4d2f-aad4-9d1b52c752a1","Type":"ContainerStarted","Data":"5614923537c082f06e78dccd10ac04a4d36118a089c1c129838c8de2cb99de2e"} Oct 06 12:03:47 crc kubenswrapper[4958]: I1006 12:03:47.090820 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" event={"ID":"09020dee-07c8-4b21-95e6-701ea33f70f4","Type":"ContainerStarted","Data":"2e9196f8e2fa1b198017d2f2853d5824896c81e5e8f598d3763f9e71eb458ef2"} Oct 06 12:03:47 crc kubenswrapper[4958]: I1006 12:03:47.091027 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:47 crc kubenswrapper[4958]: I1006 12:03:47.132157 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" podStartSLOduration=5.132125316 podStartE2EDuration="5.132125316s" podCreationTimestamp="2025-10-06 12:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:47.114291258 +0000 UTC m=+981.000316586" watchObservedRunningTime="2025-10-06 12:03:47.132125316 +0000 UTC m=+981.018150624" Oct 06 
12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.150540 4958 generic.go:334] "Generic (PLEG): container finished" podID="d7df091e-b3a3-441a-a831-d57964a84438" containerID="caa5ca7ecebae672272f61cf77f436f3c18a6481059473294df40d3abfa17a2d" exitCode=0 Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.150994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqtmw" event={"ID":"d7df091e-b3a3-441a-a831-d57964a84438","Type":"ContainerDied","Data":"caa5ca7ecebae672272f61cf77f436f3c18a6481059473294df40d3abfa17a2d"} Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.168215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b","Type":"ContainerStarted","Data":"74682112b39b38c47f9f81cf700d4daaed822f0bed420dca32ef8590d546a69d"} Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.168300 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-log" containerID="cri-o://526b744ee4ba4bb6750ced97d55caced455d74d4b596c4dc153eabd3f6d228dc" gracePeriod=30 Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.168413 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-httpd" containerID="cri-o://74682112b39b38c47f9f81cf700d4daaed822f0bed420dca32ef8590d546a69d" gracePeriod=30 Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.173246 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-log" containerID="cri-o://5614923537c082f06e78dccd10ac04a4d36118a089c1c129838c8de2cb99de2e" gracePeriod=30 Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 
12:03:48.173320 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a220bc7-3077-4d2f-aad4-9d1b52c752a1","Type":"ContainerStarted","Data":"c90491938e74b470e788868037a5482006c19d5c5efdcc1ba2eb386461331631"} Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.173366 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-httpd" containerID="cri-o://c90491938e74b470e788868037a5482006c19d5c5efdcc1ba2eb386461331631" gracePeriod=30 Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.210416 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.21040022 podStartE2EDuration="6.21040022s" podCreationTimestamp="2025-10-06 12:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:48.208512083 +0000 UTC m=+982.094537391" watchObservedRunningTime="2025-10-06 12:03:48.21040022 +0000 UTC m=+982.096425518" Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.246817 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.246795659 podStartE2EDuration="6.246795659s" podCreationTimestamp="2025-10-06 12:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:48.239700715 +0000 UTC m=+982.125726023" watchObservedRunningTime="2025-10-06 12:03:48.246795659 +0000 UTC m=+982.132820967" Oct 06 12:03:48 crc kubenswrapper[4958]: I1006 12:03:48.260376 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:03:49 crc kubenswrapper[4958]: I1006 12:03:49.185540 
4958 generic.go:334] "Generic (PLEG): container finished" podID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerID="74682112b39b38c47f9f81cf700d4daaed822f0bed420dca32ef8590d546a69d" exitCode=143 Oct 06 12:03:49 crc kubenswrapper[4958]: I1006 12:03:49.186676 4958 generic.go:334] "Generic (PLEG): container finished" podID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerID="526b744ee4ba4bb6750ced97d55caced455d74d4b596c4dc153eabd3f6d228dc" exitCode=143 Oct 06 12:03:49 crc kubenswrapper[4958]: I1006 12:03:49.185627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b","Type":"ContainerDied","Data":"74682112b39b38c47f9f81cf700d4daaed822f0bed420dca32ef8590d546a69d"} Oct 06 12:03:49 crc kubenswrapper[4958]: I1006 12:03:49.186906 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b","Type":"ContainerDied","Data":"526b744ee4ba4bb6750ced97d55caced455d74d4b596c4dc153eabd3f6d228dc"} Oct 06 12:03:49 crc kubenswrapper[4958]: I1006 12:03:49.189501 4958 generic.go:334] "Generic (PLEG): container finished" podID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerID="5614923537c082f06e78dccd10ac04a4d36118a089c1c129838c8de2cb99de2e" exitCode=143 Oct 06 12:03:49 crc kubenswrapper[4958]: I1006 12:03:49.189594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a220bc7-3077-4d2f-aad4-9d1b52c752a1","Type":"ContainerDied","Data":"5614923537c082f06e78dccd10ac04a4d36118a089c1c129838c8de2cb99de2e"} Oct 06 12:03:50 crc kubenswrapper[4958]: I1006 12:03:50.199480 4958 generic.go:334] "Generic (PLEG): container finished" podID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerID="c90491938e74b470e788868037a5482006c19d5c5efdcc1ba2eb386461331631" exitCode=0 Oct 06 12:03:50 crc kubenswrapper[4958]: I1006 12:03:50.199526 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a220bc7-3077-4d2f-aad4-9d1b52c752a1","Type":"ContainerDied","Data":"c90491938e74b470e788868037a5482006c19d5c5efdcc1ba2eb386461331631"} Oct 06 12:03:51 crc kubenswrapper[4958]: I1006 12:03:51.981741 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.111405 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-logs\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.111462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-combined-ca-bundle\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.111513 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7tbp\" (UniqueName: \"kubernetes.io/projected/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-kube-api-access-p7tbp\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.111556 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-scripts\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.111571 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.111598 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-config-data\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.113697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-httpd-run\") pod \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\" (UID: \"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b\") " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.114916 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-logs" (OuterVolumeSpecName: "logs") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.122545 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.128384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). 
InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.130421 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-kube-api-access-p7tbp" (OuterVolumeSpecName: "kube-api-access-p7tbp") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). InnerVolumeSpecName "kube-api-access-p7tbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.138359 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-scripts" (OuterVolumeSpecName: "scripts") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.158294 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.208544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-config-data" (OuterVolumeSpecName: "config-data") pod "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" (UID: "0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215533 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215561 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215574 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215587 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215600 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215611 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.215626 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7tbp\" (UniqueName: \"kubernetes.io/projected/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b-kube-api-access-p7tbp\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.220595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b","Type":"ContainerDied","Data":"211685982b424017832f7e049eb759becf79388279422a6a6c4d3298fb154152"} Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.220646 4958 scope.go:117] "RemoveContainer" containerID="74682112b39b38c47f9f81cf700d4daaed822f0bed420dca32ef8590d546a69d" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.220745 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.251163 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.291207 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.314440 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.318304 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.327896 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:52 crc kubenswrapper[4958]: E1006 12:03:52.328941 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-httpd" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.328955 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-httpd" Oct 06 12:03:52 crc kubenswrapper[4958]: E1006 12:03:52.328996 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-log" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.329002 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-log" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.330126 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-httpd" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.330173 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" containerName="glance-log" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.336461 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.339971 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.344574 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.344778 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.521945 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522101 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522181 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522237 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522390 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfwwn\" (UniqueName: \"kubernetes.io/projected/7b1cc01c-8b93-41aa-bf54-7d98363efbca-kube-api-access-mfwwn\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522427 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.522547 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624558 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfwwn\" (UniqueName: \"kubernetes.io/projected/7b1cc01c-8b93-41aa-bf54-7d98363efbca-kube-api-access-mfwwn\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624712 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624755 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624819 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624883 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.624912 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" 
(UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.625349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.625693 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.633227 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.634778 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.640577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " 
pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.642733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.643742 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfwwn\" (UniqueName: \"kubernetes.io/projected/7b1cc01c-8b93-41aa-bf54-7d98363efbca-kube-api-access-mfwwn\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.670446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.885628 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.925454 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b" path="/var/lib/kubelet/pods/0dd31eb7-a9e6-4ad9-99c6-85cf00d42b4b/volumes" Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.950026 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-l8kbr"] Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.950377 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" 
podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="dnsmasq-dns" containerID="cri-o://174de2d0c3aa07f89385198bd740a7bd7177cda9c12afae246e3efbe502acda1" gracePeriod=10 Oct 06 12:03:52 crc kubenswrapper[4958]: I1006 12:03:52.962499 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:53 crc kubenswrapper[4958]: I1006 12:03:53.232039 4958 generic.go:334] "Generic (PLEG): container finished" podID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerID="174de2d0c3aa07f89385198bd740a7bd7177cda9c12afae246e3efbe502acda1" exitCode=0 Oct 06 12:03:53 crc kubenswrapper[4958]: I1006 12:03:53.232079 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" event={"ID":"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50","Type":"ContainerDied","Data":"174de2d0c3aa07f89385198bd740a7bd7177cda9c12afae246e3efbe502acda1"} Oct 06 12:03:53 crc kubenswrapper[4958]: I1006 12:03:53.801685 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:03:53 crc kubenswrapper[4958]: I1006 12:03:53.801801 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.215986 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7567d7f44b-s7rvg" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.220921 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.269873 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqtmw" event={"ID":"d7df091e-b3a3-441a-a831-d57964a84438","Type":"ContainerDied","Data":"d31ab66307e7b22a4dce67f5ce337de2239c6599e7f2dfbcb83b18aa87ae54e7"} Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.269916 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31ab66307e7b22a4dce67f5ce337de2239c6599e7f2dfbcb83b18aa87ae54e7" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.269934 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vqtmw" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.318963 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69f5d58bb-ghq4l" podUID="7fdb6376-1709-4378-8fe4-eaf26cf5fde7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.381065 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-config-data\") pod \"d7df091e-b3a3-441a-a831-d57964a84438\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.381109 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-credential-keys\") pod \"d7df091e-b3a3-441a-a831-d57964a84438\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.381185 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87tt\" (UniqueName: \"kubernetes.io/projected/d7df091e-b3a3-441a-a831-d57964a84438-kube-api-access-p87tt\") pod \"d7df091e-b3a3-441a-a831-d57964a84438\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.381236 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-fernet-keys\") pod \"d7df091e-b3a3-441a-a831-d57964a84438\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.381252 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-scripts\") pod \"d7df091e-b3a3-441a-a831-d57964a84438\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.381290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-combined-ca-bundle\") pod \"d7df091e-b3a3-441a-a831-d57964a84438\" (UID: \"d7df091e-b3a3-441a-a831-d57964a84438\") " Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.386200 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d7df091e-b3a3-441a-a831-d57964a84438" (UID: "d7df091e-b3a3-441a-a831-d57964a84438"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.390222 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7df091e-b3a3-441a-a831-d57964a84438-kube-api-access-p87tt" (OuterVolumeSpecName: "kube-api-access-p87tt") pod "d7df091e-b3a3-441a-a831-d57964a84438" (UID: "d7df091e-b3a3-441a-a831-d57964a84438"). InnerVolumeSpecName "kube-api-access-p87tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.392250 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d7df091e-b3a3-441a-a831-d57964a84438" (UID: "d7df091e-b3a3-441a-a831-d57964a84438"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.431273 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7df091e-b3a3-441a-a831-d57964a84438" (UID: "d7df091e-b3a3-441a-a831-d57964a84438"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.446283 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-scripts" (OuterVolumeSpecName: "scripts") pod "d7df091e-b3a3-441a-a831-d57964a84438" (UID: "d7df091e-b3a3-441a-a831-d57964a84438"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.480763 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-config-data" (OuterVolumeSpecName: "config-data") pod "d7df091e-b3a3-441a-a831-d57964a84438" (UID: "d7df091e-b3a3-441a-a831-d57964a84438"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.482979 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.483018 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.483031 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87tt\" (UniqueName: \"kubernetes.io/projected/d7df091e-b3a3-441a-a831-d57964a84438-kube-api-access-p87tt\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.483043 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.483054 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:55 crc kubenswrapper[4958]: I1006 12:03:55.483062 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7df091e-b3a3-441a-a831-d57964a84438-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.316984 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6d6b6556f7-c2dwg"] Oct 06 12:03:56 crc kubenswrapper[4958]: E1006 12:03:56.317379 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7df091e-b3a3-441a-a831-d57964a84438" containerName="keystone-bootstrap" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.317392 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7df091e-b3a3-441a-a831-d57964a84438" containerName="keystone-bootstrap" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.317598 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7df091e-b3a3-441a-a831-d57964a84438" containerName="keystone-bootstrap" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.318186 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.321913 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.322310 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxvw8" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.322349 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.322458 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.322482 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.323264 4958 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.326752 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6d6b6556f7-c2dwg"] Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.505536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-scripts\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.505867 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k787z\" (UniqueName: \"kubernetes.io/projected/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-kube-api-access-k787z\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.505905 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-public-tls-certs\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.505930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-internal-tls-certs\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.505956 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-fernet-keys\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.505988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-config-data\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.506015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-combined-ca-bundle\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.506165 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-credential-keys\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.607867 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-public-tls-certs\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.607919 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-internal-tls-certs\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.607952 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-fernet-keys\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.607989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-config-data\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.608018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-combined-ca-bundle\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.608052 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-credential-keys\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.608093 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-scripts\") pod \"keystone-6d6b6556f7-c2dwg\" 
(UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.608134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k787z\" (UniqueName: \"kubernetes.io/projected/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-kube-api-access-k787z\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.613941 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-fernet-keys\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.615003 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-combined-ca-bundle\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.616014 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-config-data\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.616508 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-credential-keys\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc 
kubenswrapper[4958]: I1006 12:03:56.617891 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-scripts\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.618800 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-public-tls-certs\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.620417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-internal-tls-certs\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.628073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k787z\" (UniqueName: \"kubernetes.io/projected/2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756-kube-api-access-k787z\") pod \"keystone-6d6b6556f7-c2dwg\" (UID: \"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756\") " pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:56 crc kubenswrapper[4958]: I1006 12:03:56.655471 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.791604 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.939397 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-swift-storage-0\") pod \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.939735 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvnl9\" (UniqueName: \"kubernetes.io/projected/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-kube-api-access-fvnl9\") pod \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.939770 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-nb\") pod \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.939800 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-config\") pod \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.939825 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-svc\") pod \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.939886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-sb\") pod \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\" (UID: \"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50\") " Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.950612 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-kube-api-access-fvnl9" (OuterVolumeSpecName: "kube-api-access-fvnl9") pod "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" (UID: "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50"). InnerVolumeSpecName "kube-api-access-fvnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.991245 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-config" (OuterVolumeSpecName: "config") pod "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" (UID: "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:57 crc kubenswrapper[4958]: I1006 12:03:57.993008 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" (UID: "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.012016 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" (UID: "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.034658 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" (UID: "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.045274 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.045318 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.045328 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvnl9\" (UniqueName: \"kubernetes.io/projected/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-kube-api-access-fvnl9\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.045340 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.045349 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.048633 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" (UID: "7ac6939f-98bc-44dd-9b9e-cf9ee69fce50"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.147511 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.302095 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" event={"ID":"7ac6939f-98bc-44dd-9b9e-cf9ee69fce50","Type":"ContainerDied","Data":"c8b03a415d00a3932d743b37d0a12180281d9e8e5d1d5bef517482c86703efcc"} Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.302195 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.334277 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-l8kbr"] Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.342035 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-l8kbr"] Oct 06 12:03:58 crc kubenswrapper[4958]: I1006 12:03:58.922674 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" path="/var/lib/kubelet/pods/7ac6939f-98bc-44dd-9b9e-cf9ee69fce50/volumes" Oct 06 12:04:01 crc kubenswrapper[4958]: I1006 12:04:01.674170 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-l8kbr" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Oct 06 12:04:02 crc kubenswrapper[4958]: I1006 12:04:02.340900 4958 generic.go:334] "Generic (PLEG): container finished" podID="d0818578-1e79-4dfb-8257-b8b1a2bc0cef" containerID="be6cfd84d054a5d7f5ab91a026a82bbc0ded37e649b2de2a77f0d720bc80b0cf" exitCode=0 Oct 06 12:04:02 crc kubenswrapper[4958]: I1006 12:04:02.341211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dv44v" event={"ID":"d0818578-1e79-4dfb-8257-b8b1a2bc0cef","Type":"ContainerDied","Data":"be6cfd84d054a5d7f5ab91a026a82bbc0ded37e649b2de2a77f0d720bc80b0cf"} Oct 06 12:04:05 crc kubenswrapper[4958]: I1006 12:04:05.213220 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7567d7f44b-s7rvg" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.108593 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.627023 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dv44v" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.723563 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ffvd\" (UniqueName: \"kubernetes.io/projected/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-kube-api-access-5ffvd\") pod \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.723720 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-combined-ca-bundle\") pod \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.723782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-config\") pod \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\" (UID: \"d0818578-1e79-4dfb-8257-b8b1a2bc0cef\") " Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.740399 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-kube-api-access-5ffvd" (OuterVolumeSpecName: "kube-api-access-5ffvd") pod "d0818578-1e79-4dfb-8257-b8b1a2bc0cef" (UID: "d0818578-1e79-4dfb-8257-b8b1a2bc0cef"). InnerVolumeSpecName "kube-api-access-5ffvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.759982 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0818578-1e79-4dfb-8257-b8b1a2bc0cef" (UID: "d0818578-1e79-4dfb-8257-b8b1a2bc0cef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.762014 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-config" (OuterVolumeSpecName: "config") pod "d0818578-1e79-4dfb-8257-b8b1a2bc0cef" (UID: "d0818578-1e79-4dfb-8257-b8b1a2bc0cef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.827263 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.827312 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:07 crc kubenswrapper[4958]: I1006 12:04:07.827333 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ffvd\" (UniqueName: \"kubernetes.io/projected/d0818578-1e79-4dfb-8257-b8b1a2bc0cef-kube-api-access-5ffvd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.408871 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dv44v" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.409325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dv44v" event={"ID":"d0818578-1e79-4dfb-8257-b8b1a2bc0cef","Type":"ContainerDied","Data":"b280abe1d757d880b67606af47c6b2fe244ce0cc13ae10da93ba2cfb9268b641"} Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.409356 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b280abe1d757d880b67606af47c6b2fe244ce0cc13ae10da93ba2cfb9268b641" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.770181 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-xqx5n"] Oct 06 12:04:08 crc kubenswrapper[4958]: E1006 12:04:08.770765 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="dnsmasq-dns" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.770776 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="dnsmasq-dns" Oct 06 12:04:08 crc kubenswrapper[4958]: E1006 12:04:08.770792 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0818578-1e79-4dfb-8257-b8b1a2bc0cef" containerName="neutron-db-sync" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.770798 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0818578-1e79-4dfb-8257-b8b1a2bc0cef" containerName="neutron-db-sync" Oct 06 12:04:08 crc kubenswrapper[4958]: E1006 12:04:08.770820 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="init" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.770827 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="init" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.770980 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d0818578-1e79-4dfb-8257-b8b1a2bc0cef" containerName="neutron-db-sync" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.770993 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac6939f-98bc-44dd-9b9e-cf9ee69fce50" containerName="dnsmasq-dns" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.771917 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.791661 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-xqx5n"] Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.811638 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69f5d58bb-ghq4l" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.872174 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-df9b9d74d-rffxt"] Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.874498 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.881015 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.881059 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pxfvn" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.881260 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.881295 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.891341 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df9b9d74d-rffxt"] Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.927886 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7567d7f44b-s7rvg"] Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.928215 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7567d7f44b-s7rvg" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon-log" containerID="cri-o://1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808" gracePeriod=30 Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.928574 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7567d7f44b-s7rvg" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon" containerID="cri-o://2bf82343c7eb8ce707462fc4316e29f13036b7e1df47ae7e9cf97e8645c62246" gracePeriod=30 Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.957657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.958217 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.958369 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-config\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.958520 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxpd\" (UniqueName: \"kubernetes.io/projected/1f8e795b-4ea8-4088-8ccb-345195b2d313-kube-api-access-4fxpd\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.958675 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:08 crc kubenswrapper[4958]: I1006 12:04:08.958811 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060445 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxpd\" (UniqueName: \"kubernetes.io/projected/1f8e795b-4ea8-4088-8ccb-345195b2d313-kube-api-access-4fxpd\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44mr\" (UniqueName: \"kubernetes.io/projected/db7e63d2-d43d-417e-939a-be456eaae637-kube-api-access-g44mr\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060540 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060561 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-config\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-ovndb-tls-certs\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060620 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060643 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060667 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-combined-ca-bundle\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060717 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060752 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-httpd-config\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.060783 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-config\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.061936 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-config\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.062287 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.062472 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.063869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: 
\"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.064840 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.088081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxpd\" (UniqueName: \"kubernetes.io/projected/1f8e795b-4ea8-4088-8ccb-345195b2d313-kube-api-access-4fxpd\") pod \"dnsmasq-dns-84b966f6c9-xqx5n\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.098845 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.162480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-combined-ca-bundle\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.162551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-httpd-config\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.162632 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g44mr\" (UniqueName: 
\"kubernetes.io/projected/db7e63d2-d43d-417e-939a-be456eaae637-kube-api-access-g44mr\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.162668 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-config\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.162696 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-ovndb-tls-certs\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.169702 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-config\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.169990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-combined-ca-bundle\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.172881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-ovndb-tls-certs\") pod \"neutron-df9b9d74d-rffxt\" (UID: 
\"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.173115 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-httpd-config\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.188375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g44mr\" (UniqueName: \"kubernetes.io/projected/db7e63d2-d43d-417e-939a-be456eaae637-kube-api-access-g44mr\") pod \"neutron-df9b9d74d-rffxt\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") " pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:09 crc kubenswrapper[4958]: I1006 12:04:09.206770 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:10 crc kubenswrapper[4958]: E1006 12:04:10.274842 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 06 12:04:10 crc kubenswrapper[4958]: E1006 12:04:10.275336 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjjl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wqlhj_openstack(9e240eda-9921-45e1-991d-971031189ee4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:04:10 crc kubenswrapper[4958]: E1006 12:04:10.276416 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wqlhj" podUID="9e240eda-9921-45e1-991d-971031189ee4" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.363811 4958 scope.go:117] "RemoveContainer" containerID="526b744ee4ba4bb6750ced97d55caced455d74d4b596c4dc153eabd3f6d228dc" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.449043 4958 generic.go:334] "Generic (PLEG): container finished" podID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerID="2bf82343c7eb8ce707462fc4316e29f13036b7e1df47ae7e9cf97e8645c62246" exitCode=0 Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.449474 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7567d7f44b-s7rvg" event={"ID":"f0aa3dc0-4553-4ec3-bec4-097c68139910","Type":"ContainerDied","Data":"2bf82343c7eb8ce707462fc4316e29f13036b7e1df47ae7e9cf97e8645c62246"} Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.456746 4958 scope.go:117] "RemoveContainer" containerID="174de2d0c3aa07f89385198bd740a7bd7177cda9c12afae246e3efbe502acda1" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.473563 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a220bc7-3077-4d2f-aad4-9d1b52c752a1","Type":"ContainerDied","Data":"49ceed096fec91691e79e713b13aa963793e146f9e98e4cb36d00b9bec5ec9d9"} Oct 06 12:04:10 crc 
kubenswrapper[4958]: I1006 12:04:10.473620 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ceed096fec91691e79e713b13aa963793e146f9e98e4cb36d00b9bec5ec9d9" Oct 06 12:04:10 crc kubenswrapper[4958]: E1006 12:04:10.480006 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wqlhj" podUID="9e240eda-9921-45e1-991d-971031189ee4" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.519302 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.560408 4958 scope.go:117] "RemoveContainer" containerID="e2ee0a47c24873de1268babcae30e2cde51e3a99302edc26d6276d280520157b" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.592116 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-combined-ca-bundle\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.592183 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.592207 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-scripts\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc 
kubenswrapper[4958]: I1006 12:04:10.592249 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrmtv\" (UniqueName: \"kubernetes.io/projected/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-kube-api-access-zrmtv\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.592285 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-logs\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.592320 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-httpd-run\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.592346 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-config-data\") pod \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\" (UID: \"9a220bc7-3077-4d2f-aad4-9d1b52c752a1\") " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.596024 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.596110 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-logs" (OuterVolumeSpecName: "logs") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.596599 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-scripts" (OuterVolumeSpecName: "scripts") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.600836 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.609425 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-kube-api-access-zrmtv" (OuterVolumeSpecName: "kube-api-access-zrmtv") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "kube-api-access-zrmtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.640459 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.684739 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-config-data" (OuterVolumeSpecName: "config-data") pod "9a220bc7-3077-4d2f-aad4-9d1b52c752a1" (UID: "9a220bc7-3077-4d2f-aad4-9d1b52c752a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694239 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694285 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694296 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694304 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrmtv\" (UniqueName: \"kubernetes.io/projected/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-kube-api-access-zrmtv\") on node \"crc\" DevicePath \"\"" Oct 06 
12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694315 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694324 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.694332 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a220bc7-3077-4d2f-aad4-9d1b52c752a1-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.719221 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.795204 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:10 crc kubenswrapper[4958]: W1006 12:04:10.908839 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8e795b_4ea8_4088_8ccb_345195b2d313.slice/crio-1190f71ec7b9f7f271791a0eb50046621b76a1a813d882e342854589ee3cc580 WatchSource:0}: Error finding container 1190f71ec7b9f7f271791a0eb50046621b76a1a813d882e342854589ee3cc580: Status 404 returned error can't find the container with id 1190f71ec7b9f7f271791a0eb50046621b76a1a813d882e342854589ee3cc580 Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.909295 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-xqx5n"] Oct 06 12:04:10 crc kubenswrapper[4958]: I1006 12:04:10.993305 
4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:11 crc kubenswrapper[4958]: W1006 12:04:11.013937 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1cc01c_8b93_41aa_bf54_7d98363efbca.slice/crio-43e6b6cc944453ee0ba21cd26936ee8af6dda9689484ab901d805e620b334d7d WatchSource:0}: Error finding container 43e6b6cc944453ee0ba21cd26936ee8af6dda9689484ab901d805e620b334d7d: Status 404 returned error can't find the container with id 43e6b6cc944453ee0ba21cd26936ee8af6dda9689484ab901d805e620b334d7d Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.031780 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bc84f8f6c-tdr2k"] Oct 06 12:04:11 crc kubenswrapper[4958]: E1006 12:04:11.032395 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-httpd" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.032488 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-httpd" Oct 06 12:04:11 crc kubenswrapper[4958]: E1006 12:04:11.032587 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-log" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.032682 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-log" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.032992 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-httpd" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.033086 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" containerName="glance-log" Oct 06 12:04:11 crc 
kubenswrapper[4958]: I1006 12:04:11.035207 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.038858 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.039069 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.045344 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bc84f8f6c-tdr2k"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.061354 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6d6b6556f7-c2dwg"] Oct 06 12:04:11 crc kubenswrapper[4958]: W1006 12:04:11.153723 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb7e63d2_d43d_417e_939a_be456eaae637.slice/crio-a509850dd117c718747e1eb2989becc666f554c53a50236926ded0b8d3e0d4d3 WatchSource:0}: Error finding container a509850dd117c718747e1eb2989becc666f554c53a50236926ded0b8d3e0d4d3: Status 404 returned error can't find the container with id a509850dd117c718747e1eb2989becc666f554c53a50236926ded0b8d3e0d4d3 Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.154390 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df9b9d74d-rffxt"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201192 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-combined-ca-bundle\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201247 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-internal-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201273 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-public-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-httpd-config\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-config\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzlm\" (UniqueName: \"kubernetes.io/projected/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-kube-api-access-cjzlm\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.201925 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-ovndb-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.304691 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-ovndb-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.305190 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-combined-ca-bundle\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.305255 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-internal-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.305284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-public-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.305852 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-httpd-config\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.305893 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-config\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.305952 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjzlm\" (UniqueName: \"kubernetes.io/projected/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-kube-api-access-cjzlm\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.309882 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-internal-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.312184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-ovndb-tls-certs\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.312324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-public-tls-certs\") 
pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.313942 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-httpd-config\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.315914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-config\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.319803 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-combined-ca-bundle\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.323803 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjzlm\" (UniqueName: \"kubernetes.io/projected/6ef174b4-138f-4dc1-8618-afb9c9e8f9b3-kube-api-access-cjzlm\") pod \"neutron-7bc84f8f6c-tdr2k\" (UID: \"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3\") " pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.381732 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.485799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df9b9d74d-rffxt" event={"ID":"db7e63d2-d43d-417e-939a-be456eaae637","Type":"ContainerStarted","Data":"a509850dd117c718747e1eb2989becc666f554c53a50236926ded0b8d3e0d4d3"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.488922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s7xhl" event={"ID":"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7","Type":"ContainerStarted","Data":"bda9bd9e98ae5a4cd88593ea36a8f7ca2e61dd496c0d5a786ae8a7b15506bfcc"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.494327 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9kx8n" event={"ID":"2d86d752-3a4d-4940-b221-242b3253c418","Type":"ContainerStarted","Data":"e85b20492bebd2fb52a6adfb25d6edcf5303303b44221c9d93db5b05ea4800f3"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.511904 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s7xhl" podStartSLOduration=2.536326302 podStartE2EDuration="45.511888248s" podCreationTimestamp="2025-10-06 12:03:26 +0000 UTC" firstStartedPulling="2025-10-06 12:03:27.427004579 +0000 UTC m=+961.313029887" lastFinishedPulling="2025-10-06 12:04:10.402566505 +0000 UTC m=+1004.288591833" observedRunningTime="2025-10-06 12:04:11.510488566 +0000 UTC m=+1005.396513874" watchObservedRunningTime="2025-10-06 12:04:11.511888248 +0000 UTC m=+1005.397913556" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.531261 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9kx8n" podStartSLOduration=18.914241164 podStartE2EDuration="34.531243742s" podCreationTimestamp="2025-10-06 12:03:37 +0000 UTC" firstStartedPulling="2025-10-06 12:03:42.102189432 +0000 UTC m=+975.988214740" 
lastFinishedPulling="2025-10-06 12:03:57.71919201 +0000 UTC m=+991.605217318" observedRunningTime="2025-10-06 12:04:11.525249181 +0000 UTC m=+1005.411274489" watchObservedRunningTime="2025-10-06 12:04:11.531243742 +0000 UTC m=+1005.417269040" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.532679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerStarted","Data":"0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.542334 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerID="4c5d3274cd52120292043ed37eefadc7b739d29b8af799b20b2079314766aeb6" exitCode=0 Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.542597 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" event={"ID":"1f8e795b-4ea8-4088-8ccb-345195b2d313","Type":"ContainerDied","Data":"4c5d3274cd52120292043ed37eefadc7b739d29b8af799b20b2079314766aeb6"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.542638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" event={"ID":"1f8e795b-4ea8-4088-8ccb-345195b2d313","Type":"ContainerStarted","Data":"1190f71ec7b9f7f271791a0eb50046621b76a1a813d882e342854589ee3cc580"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.546887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d6b6556f7-c2dwg" event={"ID":"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756","Type":"ContainerStarted","Data":"045ce52a3d47053064ecddef696286e4629c49aa22dc14c3f9b02b3c4ec5b4c7"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.547120 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d6b6556f7-c2dwg" 
event={"ID":"2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756","Type":"ContainerStarted","Data":"83c7871ad200f382f84c150ec5cd32a99ea32dc37bf131acc89a99c88019520e"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.549052 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.557427 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b1cc01c-8b93-41aa-bf54-7d98363efbca","Type":"ContainerStarted","Data":"43e6b6cc944453ee0ba21cd26936ee8af6dda9689484ab901d805e620b334d7d"} Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.562597 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.596797 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6d6b6556f7-c2dwg" podStartSLOduration=15.596774721 podStartE2EDuration="15.596774721s" podCreationTimestamp="2025-10-06 12:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:11.580241182 +0000 UTC m=+1005.466266490" watchObservedRunningTime="2025-10-06 12:04:11.596774721 +0000 UTC m=+1005.482800029" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.657077 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.681458 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.693685 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.695170 4958 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.702931 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.703095 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.706833 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.778203 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bc84f8f6c-tdr2k"] Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.813764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.814012 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.814070 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9x5m\" (UniqueName: \"kubernetes.io/projected/bdff7949-e76a-484d-983d-c3f8fee7f175-kube-api-access-j9x5m\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 
crc kubenswrapper[4958]: I1006 12:04:11.814103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.814171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.814206 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-scripts\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.814243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-logs\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.814261 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-config-data\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 
crc kubenswrapper[4958]: I1006 12:04:11.916173 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916242 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-scripts\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-logs\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916327 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-config-data\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916706 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.916718 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9x5m\" (UniqueName: \"kubernetes.io/projected/bdff7949-e76a-484d-983d-c3f8fee7f175-kube-api-access-j9x5m\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.917231 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.917279 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.917576 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-logs\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.930227 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.930386 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.930860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-scripts\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.940376 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-config-data\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.947590 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9x5m\" (UniqueName: \"kubernetes.io/projected/bdff7949-e76a-484d-983d-c3f8fee7f175-kube-api-access-j9x5m\") pod 
\"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:11 crc kubenswrapper[4958]: I1006 12:04:11.971766 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.030022 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.508344 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.557441 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.606048 4958 generic.go:334] "Generic (PLEG): container finished" podID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" containerID="4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5" exitCode=137 Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.606125 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b66cc59c-dwbpp" event={"ID":"cedfa2ab-1916-46e6-8e95-18c7a6da6046","Type":"ContainerDied","Data":"4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.606162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b66cc59c-dwbpp" event={"ID":"cedfa2ab-1916-46e6-8e95-18c7a6da6046","Type":"ContainerDied","Data":"69bdb513c58fd97119854329b2d4b4b9dc23d3283c0b30c656c6f660c5d2d241"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.606179 4958 
scope.go:117] "RemoveContainer" containerID="4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.606326 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78b66cc59c-dwbpp" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.636362 4958 generic.go:334] "Generic (PLEG): container finished" podID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerID="d00543839437d8ddab081b5d4fcdfeb78edeeaa73c91085cab177c7f11462364" exitCode=137 Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.636728 4958 generic.go:334] "Generic (PLEG): container finished" podID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerID="2248175c24a706093c575dc76c190f64771a8e5f014536cd647d96ba8aaf3967" exitCode=137 Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.636797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9bd9f6f-dzxmr" event={"ID":"76fb3845-b6c8-49ed-a7c5-fbf1254134dd","Type":"ContainerDied","Data":"d00543839437d8ddab081b5d4fcdfeb78edeeaa73c91085cab177c7f11462364"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.636822 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9bd9f6f-dzxmr" event={"ID":"76fb3845-b6c8-49ed-a7c5-fbf1254134dd","Type":"ContainerDied","Data":"2248175c24a706093c575dc76c190f64771a8e5f014536cd647d96ba8aaf3967"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.639922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df9b9d74d-rffxt" event={"ID":"db7e63d2-d43d-417e-939a-be456eaae637","Type":"ContainerStarted","Data":"d4f06781df28f582f5978d251210caeb10e53375d803de3880309ed0e406df19"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.639965 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df9b9d74d-rffxt" 
event={"ID":"db7e63d2-d43d-417e-939a-be456eaae637","Type":"ContainerStarted","Data":"85829cdeb0396bf6abe3552a08347214ec79f661f6a666f2091f84050b6b906b"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.641252 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.644083 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-scripts\") pod \"13d1aca0-56df-432b-97e2-bdf76bda20b8\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.644128 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedfa2ab-1916-46e6-8e95-18c7a6da6046-logs\") pod \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.644159 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cedfa2ab-1916-46e6-8e95-18c7a6da6046-horizon-secret-key\") pod \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.644182 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-config-data\") pod \"13d1aca0-56df-432b-97e2-bdf76bda20b8\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.644687 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cedfa2ab-1916-46e6-8e95-18c7a6da6046-logs" (OuterVolumeSpecName: "logs") pod "cedfa2ab-1916-46e6-8e95-18c7a6da6046" (UID: 
"cedfa2ab-1916-46e6-8e95-18c7a6da6046"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.651816 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" event={"ID":"1f8e795b-4ea8-4088-8ccb-345195b2d313","Type":"ContainerStarted","Data":"77af60a7fc658d197c6e7c6890bdd030dd1b3747670ee8b18644b9426796ed8f"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.651977 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.654063 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13d1aca0-56df-432b-97e2-bdf76bda20b8-horizon-secret-key\") pod \"13d1aca0-56df-432b-97e2-bdf76bda20b8\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.654177 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-scripts\") pod \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.654196 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-config-data\") pod \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\" (UID: \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.654244 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hllnl\" (UniqueName: \"kubernetes.io/projected/cedfa2ab-1916-46e6-8e95-18c7a6da6046-kube-api-access-hllnl\") pod \"cedfa2ab-1916-46e6-8e95-18c7a6da6046\" (UID: 
\"cedfa2ab-1916-46e6-8e95-18c7a6da6046\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.654281 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvgz6\" (UniqueName: \"kubernetes.io/projected/13d1aca0-56df-432b-97e2-bdf76bda20b8-kube-api-access-nvgz6\") pod \"13d1aca0-56df-432b-97e2-bdf76bda20b8\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.654337 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13d1aca0-56df-432b-97e2-bdf76bda20b8-logs\") pod \"13d1aca0-56df-432b-97e2-bdf76bda20b8\" (UID: \"13d1aca0-56df-432b-97e2-bdf76bda20b8\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.655042 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedfa2ab-1916-46e6-8e95-18c7a6da6046-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.656133 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d1aca0-56df-432b-97e2-bdf76bda20b8-logs" (OuterVolumeSpecName: "logs") pod "13d1aca0-56df-432b-97e2-bdf76bda20b8" (UID: "13d1aca0-56df-432b-97e2-bdf76bda20b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.659053 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cedfa2ab-1916-46e6-8e95-18c7a6da6046-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cedfa2ab-1916-46e6-8e95-18c7a6da6046" (UID: "cedfa2ab-1916-46e6-8e95-18c7a6da6046"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.674273 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d1aca0-56df-432b-97e2-bdf76bda20b8-kube-api-access-nvgz6" (OuterVolumeSpecName: "kube-api-access-nvgz6") pod "13d1aca0-56df-432b-97e2-bdf76bda20b8" (UID: "13d1aca0-56df-432b-97e2-bdf76bda20b8"). InnerVolumeSpecName "kube-api-access-nvgz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.674407 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-df9b9d74d-rffxt" podStartSLOduration=4.674391726 podStartE2EDuration="4.674391726s" podCreationTimestamp="2025-10-06 12:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:12.673012204 +0000 UTC m=+1006.559037532" watchObservedRunningTime="2025-10-06 12:04:12.674391726 +0000 UTC m=+1006.560417024" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.674767 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedfa2ab-1916-46e6-8e95-18c7a6da6046-kube-api-access-hllnl" (OuterVolumeSpecName: "kube-api-access-hllnl") pod "cedfa2ab-1916-46e6-8e95-18c7a6da6046" (UID: "cedfa2ab-1916-46e6-8e95-18c7a6da6046"). InnerVolumeSpecName "kube-api-access-hllnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.682322 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d1aca0-56df-432b-97e2-bdf76bda20b8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "13d1aca0-56df-432b-97e2-bdf76bda20b8" (UID: "13d1aca0-56df-432b-97e2-bdf76bda20b8"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.694258 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc84f8f6c-tdr2k" event={"ID":"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3","Type":"ContainerStarted","Data":"0df223bc95780b5a2162d0873fe72106e6fd5d3f060409881cee6e229c3cd1ef"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.694538 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc84f8f6c-tdr2k" event={"ID":"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3","Type":"ContainerStarted","Data":"34a2ec95af325a4d8caffdd3a54fca0b60401ceb628dccc3759fe01cdfb2b811"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.696025 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.714263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-scripts" (OuterVolumeSpecName: "scripts") pod "13d1aca0-56df-432b-97e2-bdf76bda20b8" (UID: "13d1aca0-56df-432b-97e2-bdf76bda20b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.715036 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" podStartSLOduration=4.715010542 podStartE2EDuration="4.715010542s" podCreationTimestamp="2025-10-06 12:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:12.701320789 +0000 UTC m=+1006.587346097" watchObservedRunningTime="2025-10-06 12:04:12.715010542 +0000 UTC m=+1006.601035850" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.715783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b1cc01c-8b93-41aa-bf54-7d98363efbca","Type":"ContainerStarted","Data":"3417a50edc73a8f5851ebbd98453cc79184544b45feb20c6d00c9b8f44f808fd"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.738291 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bc84f8f6c-tdr2k" podStartSLOduration=1.7382702939999999 podStartE2EDuration="1.738270294s" podCreationTimestamp="2025-10-06 12:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:12.729216311 +0000 UTC m=+1006.615241639" watchObservedRunningTime="2025-10-06 12:04:12.738270294 +0000 UTC m=+1006.624295612" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.745957 4958 generic.go:334] "Generic (PLEG): container finished" podID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerID="973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b" exitCode=137 Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.745990 4958 generic.go:334] "Generic (PLEG): container finished" podID="13d1aca0-56df-432b-97e2-bdf76bda20b8" 
containerID="502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b" exitCode=137 Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.746123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ccb77967f-gznd9" event={"ID":"13d1aca0-56df-432b-97e2-bdf76bda20b8","Type":"ContainerDied","Data":"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.746342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ccb77967f-gznd9" event={"ID":"13d1aca0-56df-432b-97e2-bdf76bda20b8","Type":"ContainerDied","Data":"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.746355 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ccb77967f-gznd9" event={"ID":"13d1aca0-56df-432b-97e2-bdf76bda20b8","Type":"ContainerDied","Data":"29128926a0f563208b2ad7e30807da90ceeba3973e2a12128bbc0b0296b618f0"} Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.746422 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-ccb77967f-gznd9" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.759810 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cedfa2ab-1916-46e6-8e95-18c7a6da6046-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.759838 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13d1aca0-56df-432b-97e2-bdf76bda20b8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.759847 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hllnl\" (UniqueName: \"kubernetes.io/projected/cedfa2ab-1916-46e6-8e95-18c7a6da6046-kube-api-access-hllnl\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.759856 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvgz6\" (UniqueName: \"kubernetes.io/projected/13d1aca0-56df-432b-97e2-bdf76bda20b8-kube-api-access-nvgz6\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.759864 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13d1aca0-56df-432b-97e2-bdf76bda20b8-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.759872 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.764948 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-config-data" (OuterVolumeSpecName: "config-data") pod "13d1aca0-56df-432b-97e2-bdf76bda20b8" (UID: 
"13d1aca0-56df-432b-97e2-bdf76bda20b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.789514 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-scripts" (OuterVolumeSpecName: "scripts") pod "cedfa2ab-1916-46e6-8e95-18c7a6da6046" (UID: "cedfa2ab-1916-46e6-8e95-18c7a6da6046"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.801057 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-config-data" (OuterVolumeSpecName: "config-data") pod "cedfa2ab-1916-46e6-8e95-18c7a6da6046" (UID: "cedfa2ab-1916-46e6-8e95-18c7a6da6046"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.804523 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.804851 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.804839284 podStartE2EDuration="20.804839284s" podCreationTimestamp="2025-10-06 12:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:12.758189076 +0000 UTC m=+1006.644214374" watchObservedRunningTime="2025-10-06 12:04:12.804839284 +0000 UTC m=+1006.690864582" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.826498 4958 scope.go:117] "RemoveContainer" containerID="4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5" Oct 06 12:04:12 crc kubenswrapper[4958]: E1006 12:04:12.827589 4958 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5\": container with ID starting with 4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5 not found: ID does not exist" containerID="4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.827632 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5"} err="failed to get container status \"4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5\": rpc error: code = NotFound desc = could not find container \"4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5\": container with ID starting with 4ccb0b9e775fc0c0e4010437180c3e059a9b00ad4b5d41146ffe438cedf527f5 not found: ID does not exist" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.827659 4958 scope.go:117] "RemoveContainer" containerID="973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b" Oct 06 12:04:12 crc kubenswrapper[4958]: W1006 12:04:12.832673 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdff7949_e76a_484d_983d_c3f8fee7f175.slice/crio-3c1baf0a5511b0d512ebcf27a5d0666d1af31e5a95fcd903f15ff39b88e5f5d8 WatchSource:0}: Error finding container 3c1baf0a5511b0d512ebcf27a5d0666d1af31e5a95fcd903f15ff39b88e5f5d8: Status 404 returned error can't find the container with id 3c1baf0a5511b0d512ebcf27a5d0666d1af31e5a95fcd903f15ff39b88e5f5d8 Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.859185 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.867815 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.867860 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cedfa2ab-1916-46e6-8e95-18c7a6da6046-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.867874 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13d1aca0-56df-432b-97e2-bdf76bda20b8-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.931757 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a220bc7-3077-4d2f-aad4-9d1b52c752a1" path="/var/lib/kubelet/pods/9a220bc7-3077-4d2f-aad4-9d1b52c752a1/volumes" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.965460 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.965493 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.969237 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpkcs\" (UniqueName: \"kubernetes.io/projected/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-kube-api-access-cpkcs\") pod \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.969313 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-scripts\") pod \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.969357 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-logs\") pod \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.969398 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-config-data\") pod \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.969445 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-horizon-secret-key\") pod \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\" (UID: \"76fb3845-b6c8-49ed-a7c5-fbf1254134dd\") " Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.970511 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-logs" (OuterVolumeSpecName: "logs") pod "76fb3845-b6c8-49ed-a7c5-fbf1254134dd" (UID: "76fb3845-b6c8-49ed-a7c5-fbf1254134dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.975809 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-kube-api-access-cpkcs" (OuterVolumeSpecName: "kube-api-access-cpkcs") pod "76fb3845-b6c8-49ed-a7c5-fbf1254134dd" (UID: "76fb3845-b6c8-49ed-a7c5-fbf1254134dd"). 
InnerVolumeSpecName "kube-api-access-cpkcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.977838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "76fb3845-b6c8-49ed-a7c5-fbf1254134dd" (UID: "76fb3845-b6c8-49ed-a7c5-fbf1254134dd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.978241 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78b66cc59c-dwbpp"] Oct 06 12:04:12 crc kubenswrapper[4958]: I1006 12:04:12.987508 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78b66cc59c-dwbpp"] Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.013071 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-scripts" (OuterVolumeSpecName: "scripts") pod "76fb3845-b6c8-49ed-a7c5-fbf1254134dd" (UID: "76fb3845-b6c8-49ed-a7c5-fbf1254134dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.021814 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.031542 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-config-data" (OuterVolumeSpecName: "config-data") pod "76fb3845-b6c8-49ed-a7c5-fbf1254134dd" (UID: "76fb3845-b6c8-49ed-a7c5-fbf1254134dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.034229 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.071004 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.071038 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpkcs\" (UniqueName: \"kubernetes.io/projected/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-kube-api-access-cpkcs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.071049 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.071058 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.071067 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76fb3845-b6c8-49ed-a7c5-fbf1254134dd-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.122362 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ccb77967f-gznd9"] Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.129063 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-ccb77967f-gznd9"] Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.162646 4958 scope.go:117] "RemoveContainer" 
containerID="502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.214650 4958 scope.go:117] "RemoveContainer" containerID="973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b" Oct 06 12:04:13 crc kubenswrapper[4958]: E1006 12:04:13.215172 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b\": container with ID starting with 973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b not found: ID does not exist" containerID="973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.215221 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b"} err="failed to get container status \"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b\": rpc error: code = NotFound desc = could not find container \"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b\": container with ID starting with 973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b not found: ID does not exist" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.215250 4958 scope.go:117] "RemoveContainer" containerID="502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b" Oct 06 12:04:13 crc kubenswrapper[4958]: E1006 12:04:13.215568 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b\": container with ID starting with 502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b not found: ID does not exist" containerID="502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b" Oct 06 12:04:13 crc 
kubenswrapper[4958]: I1006 12:04:13.215598 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b"} err="failed to get container status \"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b\": rpc error: code = NotFound desc = could not find container \"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b\": container with ID starting with 502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b not found: ID does not exist" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.215617 4958 scope.go:117] "RemoveContainer" containerID="973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.215989 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b"} err="failed to get container status \"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b\": rpc error: code = NotFound desc = could not find container \"973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b\": container with ID starting with 973962e82adea6461f9cec954912695e72c3a5913bf0cdacdb820efa3cd66b8b not found: ID does not exist" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.216028 4958 scope.go:117] "RemoveContainer" containerID="502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.216340 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b"} err="failed to get container status \"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b\": rpc error: code = NotFound desc = could not find container \"502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b\": container 
with ID starting with 502d7fe944c247a897e334fe1a5a6ce35d47ed4b9f54fbfe8134074a495da73b not found: ID does not exist" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.756088 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9bd9f6f-dzxmr" event={"ID":"76fb3845-b6c8-49ed-a7c5-fbf1254134dd","Type":"ContainerDied","Data":"02d5da75838d46d26dbdaaa46f2c7fff052db9eb054857c9b03128366ec258ca"} Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.756125 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9bd9f6f-dzxmr" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.756494 4958 scope.go:117] "RemoveContainer" containerID="d00543839437d8ddab081b5d4fcdfeb78edeeaa73c91085cab177c7f11462364" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.757630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc84f8f6c-tdr2k" event={"ID":"6ef174b4-138f-4dc1-8618-afb9c9e8f9b3","Type":"ContainerStarted","Data":"3ec27b3d51cc3aafc9544b5d979a4a0f1f2072e977f88ef13593b9a3e73dda77"} Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.760551 4958 generic.go:334] "Generic (PLEG): container finished" podID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" containerID="bda9bd9e98ae5a4cd88593ea36a8f7ca2e61dd496c0d5a786ae8a7b15506bfcc" exitCode=0 Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.760624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s7xhl" event={"ID":"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7","Type":"ContainerDied","Data":"bda9bd9e98ae5a4cd88593ea36a8f7ca2e61dd496c0d5a786ae8a7b15506bfcc"} Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.762617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b1cc01c-8b93-41aa-bf54-7d98363efbca","Type":"ContainerStarted","Data":"b6bdf703a557f076696027a3d0398d0fe978145e58903f39a8694f11dc29ff8c"} Oct 06 12:04:13 crc 
kubenswrapper[4958]: I1006 12:04:13.762699 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.762713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.766275 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bdff7949-e76a-484d-983d-c3f8fee7f175","Type":"ContainerStarted","Data":"3c1baf0a5511b0d512ebcf27a5d0666d1af31e5a95fcd903f15ff39b88e5f5d8"} Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.802766 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9bd9f6f-dzxmr"] Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.813357 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f9bd9f6f-dzxmr"] Oct 06 12:04:13 crc kubenswrapper[4958]: I1006 12:04:13.956708 4958 scope.go:117] "RemoveContainer" containerID="2248175c24a706093c575dc76c190f64771a8e5f014536cd647d96ba8aaf3967" Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.778115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bdff7949-e76a-484d-983d-c3f8fee7f175","Type":"ContainerStarted","Data":"5e604db492197be7746aaa503276106e68a4d9e81736affd54302a31a30ba540"} Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.778184 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bdff7949-e76a-484d-983d-c3f8fee7f175","Type":"ContainerStarted","Data":"eb466551a2ed74deb5b0345d460e4a060341bf53984c36e1ad2159e49007280e"} Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.781057 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d86d752-3a4d-4940-b221-242b3253c418" 
containerID="e85b20492bebd2fb52a6adfb25d6edcf5303303b44221c9d93db5b05ea4800f3" exitCode=0 Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.781097 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9kx8n" event={"ID":"2d86d752-3a4d-4940-b221-242b3253c418","Type":"ContainerDied","Data":"e85b20492bebd2fb52a6adfb25d6edcf5303303b44221c9d93db5b05ea4800f3"} Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.800818 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.800796026 podStartE2EDuration="3.800796026s" podCreationTimestamp="2025-10-06 12:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:14.799968531 +0000 UTC m=+1008.685993839" watchObservedRunningTime="2025-10-06 12:04:14.800796026 +0000 UTC m=+1008.686821334" Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.925367 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" path="/var/lib/kubelet/pods/13d1aca0-56df-432b-97e2-bdf76bda20b8/volumes" Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.926132 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" path="/var/lib/kubelet/pods/76fb3845-b6c8-49ed-a7c5-fbf1254134dd/volumes" Oct 06 12:04:14 crc kubenswrapper[4958]: I1006 12:04:14.926842 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" path="/var/lib/kubelet/pods/cedfa2ab-1916-46e6-8e95-18c7a6da6046/volumes" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.172423 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s7xhl" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.325548 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-logs\") pod \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.325726 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwmbq\" (UniqueName: \"kubernetes.io/projected/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-kube-api-access-jwmbq\") pod \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.325780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-config-data\") pod \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.325894 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-scripts\") pod \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.326062 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-combined-ca-bundle\") pod \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\" (UID: \"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7\") " Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.326196 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-logs" (OuterVolumeSpecName: "logs") pod "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" (UID: "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.326858 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.334221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-kube-api-access-jwmbq" (OuterVolumeSpecName: "kube-api-access-jwmbq") pod "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" (UID: "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7"). InnerVolumeSpecName "kube-api-access-jwmbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.335673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-scripts" (OuterVolumeSpecName: "scripts") pod "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" (UID: "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.360195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-config-data" (OuterVolumeSpecName: "config-data") pod "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" (UID: "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.364564 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" (UID: "e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.428928 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.429393 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwmbq\" (UniqueName: \"kubernetes.io/projected/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-kube-api-access-jwmbq\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.429410 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.429421 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.797121 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s7xhl" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.797234 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s7xhl" event={"ID":"e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7","Type":"ContainerDied","Data":"103a1e5f07b752fe87fd1135d1837adcabc9ef6feed5d7964e4c7ec38a2932f0"} Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.797257 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103a1e5f07b752fe87fd1135d1837adcabc9ef6feed5d7964e4c7ec38a2932f0" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.895612 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-999fc56db-gbkpz"] Oct 06 12:04:15 crc kubenswrapper[4958]: E1006 12:04:15.900005 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.900039 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: E1006 12:04:15.900067 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon-log" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.900073 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon-log" Oct 06 12:04:15 crc kubenswrapper[4958]: E1006 12:04:15.900102 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.900108 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: E1006 12:04:15.900120 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.900153 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: E1006 12:04:15.911595 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" containerName="placement-db-sync" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.911639 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" containerName="placement-db-sync" Oct 06 12:04:15 crc kubenswrapper[4958]: E1006 12:04:15.911701 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon-log" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.911718 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon-log" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.912472 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.912507 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d1aca0-56df-432b-97e2-bdf76bda20b8" containerName="horizon-log" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.912543 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" containerName="placement-db-sync" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.912574 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedfa2ab-1916-46e6-8e95-18c7a6da6046" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.912597 4958 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.912623 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fb3845-b6c8-49ed-a7c5-fbf1254134dd" containerName="horizon-log" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.914733 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.922110 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.922321 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.922505 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.922636 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nhngr" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.922764 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 06 12:04:15 crc kubenswrapper[4958]: I1006 12:04:15.947170 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-999fc56db-gbkpz"] Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063510 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-scripts\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-public-tls-certs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-internal-tls-certs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063666 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-combined-ca-bundle\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063686 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jgks\" (UniqueName: \"kubernetes.io/projected/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-kube-api-access-8jgks\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063713 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-logs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.063775 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-config-data\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.165253 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-scripts\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.165345 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-public-tls-certs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.165407 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-internal-tls-certs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.165957 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-combined-ca-bundle\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.166002 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jgks\" 
(UniqueName: \"kubernetes.io/projected/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-kube-api-access-8jgks\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.166036 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-logs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.166118 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-config-data\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.167007 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-logs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.170387 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-combined-ca-bundle\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.171908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-config-data\") pod \"placement-999fc56db-gbkpz\" (UID: 
\"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.173915 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-public-tls-certs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.174156 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-scripts\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.174489 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-internal-tls-certs\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.183710 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jgks\" (UniqueName: \"kubernetes.io/projected/13b3f1be-75c2-49c7-a3c6-d6dd842788c4-kube-api-access-8jgks\") pod \"placement-999fc56db-gbkpz\" (UID: \"13b3f1be-75c2-49c7-a3c6-d6dd842788c4\") " pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:16 crc kubenswrapper[4958]: I1006 12:04:16.269276 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.101182 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.165138 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s74ql"] Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.165515 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerName="dnsmasq-dns" containerID="cri-o://2e9196f8e2fa1b198017d2f2853d5824896c81e5e8f598d3763f9e71eb458ef2" gracePeriod=10 Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.193841 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.322841 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-db-sync-config-data\") pod \"2d86d752-3a4d-4940-b221-242b3253c418\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.323884 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lkkf\" (UniqueName: \"kubernetes.io/projected/2d86d752-3a4d-4940-b221-242b3253c418-kube-api-access-2lkkf\") pod \"2d86d752-3a4d-4940-b221-242b3253c418\" (UID: \"2d86d752-3a4d-4940-b221-242b3253c418\") " Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.324193 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-combined-ca-bundle\") pod \"2d86d752-3a4d-4940-b221-242b3253c418\" (UID: 
\"2d86d752-3a4d-4940-b221-242b3253c418\") " Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.332271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2d86d752-3a4d-4940-b221-242b3253c418" (UID: "2d86d752-3a4d-4940-b221-242b3253c418"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.344382 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d86d752-3a4d-4940-b221-242b3253c418-kube-api-access-2lkkf" (OuterVolumeSpecName: "kube-api-access-2lkkf") pod "2d86d752-3a4d-4940-b221-242b3253c418" (UID: "2d86d752-3a4d-4940-b221-242b3253c418"). InnerVolumeSpecName "kube-api-access-2lkkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.383290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d86d752-3a4d-4940-b221-242b3253c418" (UID: "2d86d752-3a4d-4940-b221-242b3253c418"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.426157 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.426190 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d86d752-3a4d-4940-b221-242b3253c418-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.426200 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lkkf\" (UniqueName: \"kubernetes.io/projected/2d86d752-3a4d-4940-b221-242b3253c418-kube-api-access-2lkkf\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.832100 4958 generic.go:334] "Generic (PLEG): container finished" podID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerID="2e9196f8e2fa1b198017d2f2853d5824896c81e5e8f598d3763f9e71eb458ef2" exitCode=0 Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.832176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" event={"ID":"09020dee-07c8-4b21-95e6-701ea33f70f4","Type":"ContainerDied","Data":"2e9196f8e2fa1b198017d2f2853d5824896c81e5e8f598d3763f9e71eb458ef2"} Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.834787 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9kx8n" event={"ID":"2d86d752-3a4d-4940-b221-242b3253c418","Type":"ContainerDied","Data":"d9bc707430764af4c1fa07d57ccec0af93a589301e0aeaf130ad7bf19bf380f2"} Oct 06 12:04:19 crc kubenswrapper[4958]: I1006 12:04:19.834845 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9bc707430764af4c1fa07d57ccec0af93a589301e0aeaf130ad7bf19bf380f2" Oct 06 12:04:19 crc 
kubenswrapper[4958]: I1006 12:04:19.834947 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9kx8n" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.199112 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.343519 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-svc\") pod \"09020dee-07c8-4b21-95e6-701ea33f70f4\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.343889 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-sb\") pod \"09020dee-07c8-4b21-95e6-701ea33f70f4\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.343940 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-config\") pod \"09020dee-07c8-4b21-95e6-701ea33f70f4\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.344076 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-swift-storage-0\") pod \"09020dee-07c8-4b21-95e6-701ea33f70f4\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.344286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-nb\") pod \"09020dee-07c8-4b21-95e6-701ea33f70f4\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.344326 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxhc\" (UniqueName: \"kubernetes.io/projected/09020dee-07c8-4b21-95e6-701ea33f70f4-kube-api-access-8bxhc\") pod \"09020dee-07c8-4b21-95e6-701ea33f70f4\" (UID: \"09020dee-07c8-4b21-95e6-701ea33f70f4\") " Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.366068 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09020dee-07c8-4b21-95e6-701ea33f70f4-kube-api-access-8bxhc" (OuterVolumeSpecName: "kube-api-access-8bxhc") pod "09020dee-07c8-4b21-95e6-701ea33f70f4" (UID: "09020dee-07c8-4b21-95e6-701ea33f70f4"). InnerVolumeSpecName "kube-api-access-8bxhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.409045 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-56588d4b7-rsgzm"] Oct 06 12:04:20 crc kubenswrapper[4958]: E1006 12:04:20.409471 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerName="dnsmasq-dns" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.409483 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerName="dnsmasq-dns" Oct 06 12:04:20 crc kubenswrapper[4958]: E1006 12:04:20.409506 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerName="init" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.409512 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerName="init" Oct 06 12:04:20 crc kubenswrapper[4958]: E1006 12:04:20.409528 
4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d86d752-3a4d-4940-b221-242b3253c418" containerName="barbican-db-sync" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.409534 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d86d752-3a4d-4940-b221-242b3253c418" containerName="barbican-db-sync" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.409695 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" containerName="dnsmasq-dns" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.409716 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d86d752-3a4d-4940-b221-242b3253c418" containerName="barbican-db-sync" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.410611 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.412423 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2rn69" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.412667 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.413386 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.445316 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cp6\" (UniqueName: \"kubernetes.io/projected/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-kube-api-access-v6cp6\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.445369 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-config-data-custom\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.445415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-combined-ca-bundle\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.445439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-config-data\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.445461 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-logs\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.445558 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bxhc\" (UniqueName: \"kubernetes.io/projected/09020dee-07c8-4b21-95e6-701ea33f70f4-kube-api-access-8bxhc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.446795 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-999fc56db-gbkpz"] Oct 06 
12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.458095 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-56588d4b7-rsgzm"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.479292 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7f4f85d9b4-tcldf"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.481124 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.489792 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.505726 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f4f85d9b4-tcldf"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.511859 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-config" (OuterVolumeSpecName: "config") pod "09020dee-07c8-4b21-95e6-701ea33f70f4" (UID: "09020dee-07c8-4b21-95e6-701ea33f70f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.516681 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09020dee-07c8-4b21-95e6-701ea33f70f4" (UID: "09020dee-07c8-4b21-95e6-701ea33f70f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.520658 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09020dee-07c8-4b21-95e6-701ea33f70f4" (UID: "09020dee-07c8-4b21-95e6-701ea33f70f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546594 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6cp6\" (UniqueName: \"kubernetes.io/projected/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-kube-api-access-v6cp6\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-config-data-custom\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546670 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-config-data-custom\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546721 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-config-data\") pod 
\"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546746 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-combined-ca-bundle\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546767 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-combined-ca-bundle\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546788 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-config-data\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546806 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a9469b-86a8-4eec-9722-8bec4159b05e-logs\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546824 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-logs\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6v9\" (UniqueName: \"kubernetes.io/projected/f3a9469b-86a8-4eec-9722-8bec4159b05e-kube-api-access-8k6v9\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546956 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546965 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.546975 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.555201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-logs\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.566329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-config-data-custom\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.567464 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6cp6\" (UniqueName: \"kubernetes.io/projected/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-kube-api-access-v6cp6\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.568160 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-combined-ca-bundle\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.575245 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f-config-data\") pod \"barbican-worker-56588d4b7-rsgzm\" (UID: \"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f\") " pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.583886 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09020dee-07c8-4b21-95e6-701ea33f70f4" (UID: "09020dee-07c8-4b21-95e6-701ea33f70f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.588954 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-l6bx4"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.589727 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09020dee-07c8-4b21-95e6-701ea33f70f4" (UID: "09020dee-07c8-4b21-95e6-701ea33f70f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.590368 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.596936 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-l6bx4"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.630283 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5958f9647d-2d24s"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.631809 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.634883 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-config-data\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649255 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-combined-ca-bundle\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649276 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a9469b-86a8-4eec-9722-8bec4159b05e-logs\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649298 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764tf\" (UniqueName: \"kubernetes.io/projected/f71790e2-93a0-4e5a-9099-e2e5a103af3d-kube-api-access-764tf\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-combined-ca-bundle\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c9a92-e930-40fd-83bb-b478da76b0d3-logs\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: 
I1006 12:04:20.649437 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k6v9\" (UniqueName: \"kubernetes.io/projected/f3a9469b-86a8-4eec-9722-8bec4159b05e-kube-api-access-8k6v9\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649478 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649502 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6gpb\" (UniqueName: \"kubernetes.io/projected/d92c9a92-e930-40fd-83bb-b478da76b0d3-kube-api-access-t6gpb\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649553 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-config-data-custom\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649579 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data-custom\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-config\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649716 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649747 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09020dee-07c8-4b21-95e6-701ea33f70f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.649799 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a9469b-86a8-4eec-9722-8bec4159b05e-logs\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.651983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5958f9647d-2d24s"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 
12:04:20.653718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-config-data-custom\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.654262 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-config-data\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.658702 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a9469b-86a8-4eec-9722-8bec4159b05e-combined-ca-bundle\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.668275 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k6v9\" (UniqueName: \"kubernetes.io/projected/f3a9469b-86a8-4eec-9722-8bec4159b05e-kube-api-access-8k6v9\") pod \"barbican-keystone-listener-7f4f85d9b4-tcldf\" (UID: \"f3a9469b-86a8-4eec-9722-8bec4159b05e\") " pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.751361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764tf\" (UniqueName: \"kubernetes.io/projected/f71790e2-93a0-4e5a-9099-e2e5a103af3d-kube-api-access-764tf\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.751798 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.751843 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-combined-ca-bundle\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.751865 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c9a92-e930-40fd-83bb-b478da76b0d3-logs\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752453 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6gpb\" (UniqueName: \"kubernetes.io/projected/d92c9a92-e930-40fd-83bb-b478da76b0d3-kube-api-access-t6gpb\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" 
Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752488 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data-custom\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-config\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.752584 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c9a92-e930-40fd-83bb-b478da76b0d3-logs\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.753550 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.753593 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.753665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-config\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.754065 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.754734 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.755482 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-combined-ca-bundle\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.756384 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data-custom\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.757307 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.768726 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764tf\" (UniqueName: \"kubernetes.io/projected/f71790e2-93a0-4e5a-9099-e2e5a103af3d-kube-api-access-764tf\") pod \"dnsmasq-dns-75c8ddd69c-l6bx4\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.768992 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6gpb\" (UniqueName: 
\"kubernetes.io/projected/d92c9a92-e930-40fd-83bb-b478da76b0d3-kube-api-access-t6gpb\") pod \"barbican-api-5958f9647d-2d24s\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.813394 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-56588d4b7-rsgzm" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.842385 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.847962 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerStarted","Data":"7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65"} Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.848053 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-central-agent" containerID="cri-o://d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85" gracePeriod=30 Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.848140 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.848171 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="proxy-httpd" containerID="cri-o://7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65" gracePeriod=30 Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.848210 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" 
containerName="sg-core" containerID="cri-o://0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252" gracePeriod=30 Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.848243 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-notification-agent" containerID="cri-o://fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37" gracePeriod=30 Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.852166 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-999fc56db-gbkpz" event={"ID":"13b3f1be-75c2-49c7-a3c6-d6dd842788c4","Type":"ContainerStarted","Data":"63012e2e8f503386a5d49872d2a7056c473af630b0aeac5d67d3492fd03fe8a7"} Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.852213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-999fc56db-gbkpz" event={"ID":"13b3f1be-75c2-49c7-a3c6-d6dd842788c4","Type":"ContainerStarted","Data":"27c82292e8f25907b23f5cf508e395a32ecc0682b26c67c2d8acdc50adae812d"} Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.854737 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" event={"ID":"09020dee-07c8-4b21-95e6-701ea33f70f4","Type":"ContainerDied","Data":"dc4c5500e817f2f2aede49a19d1028adddd4cb4d0229cb4dde90ebfafea2ea05"} Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.854777 4958 scope.go:117] "RemoveContainer" containerID="2e9196f8e2fa1b198017d2f2853d5824896c81e5e8f598d3763f9e71eb458ef2" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.854902 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s74ql" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.883702 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.02223763 podStartE2EDuration="54.88368459s" podCreationTimestamp="2025-10-06 12:03:26 +0000 UTC" firstStartedPulling="2025-10-06 12:03:27.01039069 +0000 UTC m=+960.896415998" lastFinishedPulling="2025-10-06 12:04:19.87183764 +0000 UTC m=+1013.757862958" observedRunningTime="2025-10-06 12:04:20.874651387 +0000 UTC m=+1014.760676695" watchObservedRunningTime="2025-10-06 12:04:20.88368459 +0000 UTC m=+1014.769709898" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.897489 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s74ql"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.907740 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s74ql"] Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.908799 4958 scope.go:117] "RemoveContainer" containerID="79261e888ea072b546b90d006fc9f88ca71e9b2b04bc81f03cd04e3abc3d9bf0" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.921817 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.928924 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09020dee-07c8-4b21-95e6-701ea33f70f4" path="/var/lib/kubelet/pods/09020dee-07c8-4b21-95e6-701ea33f70f4/volumes" Oct 06 12:04:20 crc kubenswrapper[4958]: I1006 12:04:20.951357 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.245933 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f4f85d9b4-tcldf"] Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.311354 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-56588d4b7-rsgzm"] Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.537270 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5958f9647d-2d24s"] Oct 06 12:04:21 crc kubenswrapper[4958]: W1006 12:04:21.540648 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92c9a92_e930_40fd_83bb_b478da76b0d3.slice/crio-fb260795f9dc3c7b1c061cb5a0814454893a31047818d3a092ccc5375e38dc02 WatchSource:0}: Error finding container fb260795f9dc3c7b1c061cb5a0814454893a31047818d3a092ccc5375e38dc02: Status 404 returned error can't find the container with id fb260795f9dc3c7b1c061cb5a0814454893a31047818d3a092ccc5375e38dc02 Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.550911 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-l6bx4"] Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.864853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56588d4b7-rsgzm" event={"ID":"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f","Type":"ContainerStarted","Data":"0f6d0b70268028e762408defc77fd173de6e3f01ec637b19966d029b223134f5"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.867631 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-999fc56db-gbkpz" event={"ID":"13b3f1be-75c2-49c7-a3c6-d6dd842788c4","Type":"ContainerStarted","Data":"629510f9bfc4101ae4f5a14f4a1c42c756cc8047e937c9224c39a370b41b9f99"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.867693 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.867713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.872362 4958 generic.go:334] "Generic (PLEG): container finished" podID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerID="3e8e17445015a8cf78c4d36b4a999a630d2717c0eea0f10f82600c3784ae30b4" exitCode=0 Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.872423 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" event={"ID":"f71790e2-93a0-4e5a-9099-e2e5a103af3d","Type":"ContainerDied","Data":"3e8e17445015a8cf78c4d36b4a999a630d2717c0eea0f10f82600c3784ae30b4"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.872446 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" event={"ID":"f71790e2-93a0-4e5a-9099-e2e5a103af3d","Type":"ContainerStarted","Data":"aa6af3ab22016b3ee39fed0bf244d64c4a9c9ac173212c193e23daf3074f3a95"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.877748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" event={"ID":"f3a9469b-86a8-4eec-9722-8bec4159b05e","Type":"ContainerStarted","Data":"5fa81d357e55a08b32c67a3fc36ed91fe558ebfc99049a13fec8dc97d8be71db"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.883244 4958 generic.go:334] "Generic (PLEG): container finished" podID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerID="7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65" exitCode=0 Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.883280 4958 generic.go:334] "Generic (PLEG): container finished" podID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerID="0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252" 
exitCode=2 Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.883290 4958 generic.go:334] "Generic (PLEG): container finished" podID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerID="d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85" exitCode=0 Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.883310 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerDied","Data":"7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.883347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerDied","Data":"0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.883369 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerDied","Data":"d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.893979 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-999fc56db-gbkpz" podStartSLOduration=6.893961903 podStartE2EDuration="6.893961903s" podCreationTimestamp="2025-10-06 12:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:21.885664412 +0000 UTC m=+1015.771689720" watchObservedRunningTime="2025-10-06 12:04:21.893961903 +0000 UTC m=+1015.779987211" Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.896971 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958f9647d-2d24s" 
event={"ID":"d92c9a92-e930-40fd-83bb-b478da76b0d3","Type":"ContainerStarted","Data":"85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3"} Oct 06 12:04:21 crc kubenswrapper[4958]: I1006 12:04:21.897002 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958f9647d-2d24s" event={"ID":"d92c9a92-e930-40fd-83bb-b478da76b0d3","Type":"ContainerStarted","Data":"fb260795f9dc3c7b1c061cb5a0814454893a31047818d3a092ccc5375e38dc02"} Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.031472 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.031521 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.061631 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.073076 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.937390 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958f9647d-2d24s" event={"ID":"d92c9a92-e930-40fd-83bb-b478da76b0d3","Type":"ContainerStarted","Data":"ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e"} Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.937937 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.937954 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.937965 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:22 crc kubenswrapper[4958]: I1006 12:04:22.937977 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.801537 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.802038 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.964956 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56588d4b7-rsgzm" event={"ID":"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f","Type":"ContainerStarted","Data":"94087367e62d5a1c30fd19de2357cfd0b29e90a8e34372f30856c10e925a316f"} Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.965018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56588d4b7-rsgzm" event={"ID":"d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f","Type":"ContainerStarted","Data":"8f46097a83fafc3c0f5aee7b0dd9d3414431a77a5a7ed7be1886cdf438611213"} Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.976138 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5958f9647d-2d24s" podStartSLOduration=3.976116527 podStartE2EDuration="3.976116527s" podCreationTimestamp="2025-10-06 12:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:22.960400201 +0000 UTC m=+1016.846425519" watchObservedRunningTime="2025-10-06 12:04:23.976116527 +0000 UTC m=+1017.862141825" Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.976800 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5784c7f6c4-pqpwp"] Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.978182 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.985543 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" event={"ID":"f71790e2-93a0-4e5a-9099-e2e5a103af3d","Type":"ContainerStarted","Data":"3d6e23357b40b220c2cce11946235ee83bbcb5a25c4a585c3a497c082925b56b"} Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.985587 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.994386 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 06 12:04:23 crc kubenswrapper[4958]: I1006 12:04:23.994602 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.003246 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" event={"ID":"f3a9469b-86a8-4eec-9722-8bec4159b05e","Type":"ContainerStarted","Data":"8b52ab18c521389830185c7037c21af187da1c1a3dc413d5e96ffab7ec0e6e27"} Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.003282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" 
event={"ID":"f3a9469b-86a8-4eec-9722-8bec4159b05e","Type":"ContainerStarted","Data":"c93fdb82b924bc39e47c49783fe128a190732d8fe2613fca254993bb6b8bdb31"} Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.005847 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5784c7f6c4-pqpwp"] Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.006909 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-56588d4b7-rsgzm" podStartSLOduration=2.573596561 podStartE2EDuration="4.006891776s" podCreationTimestamp="2025-10-06 12:04:20 +0000 UTC" firstStartedPulling="2025-10-06 12:04:21.318826648 +0000 UTC m=+1015.204851956" lastFinishedPulling="2025-10-06 12:04:22.752121833 +0000 UTC m=+1016.638147171" observedRunningTime="2025-10-06 12:04:24.003981528 +0000 UTC m=+1017.890006826" watchObservedRunningTime="2025-10-06 12:04:24.006891776 +0000 UTC m=+1017.892917084" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.050780 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" podStartSLOduration=4.05076342 podStartE2EDuration="4.05076342s" podCreationTimestamp="2025-10-06 12:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:24.046327826 +0000 UTC m=+1017.932353134" watchObservedRunningTime="2025-10-06 12:04:24.05076342 +0000 UTC m=+1017.936788728" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.081841 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7f4f85d9b4-tcldf" podStartSLOduration=2.576392906 podStartE2EDuration="4.081826208s" podCreationTimestamp="2025-10-06 12:04:20 +0000 UTC" firstStartedPulling="2025-10-06 12:04:21.241642858 +0000 UTC m=+1015.127668166" lastFinishedPulling="2025-10-06 12:04:22.74707616 +0000 UTC m=+1016.633101468" 
observedRunningTime="2025-10-06 12:04:24.075675272 +0000 UTC m=+1017.961700570" watchObservedRunningTime="2025-10-06 12:04:24.081826208 +0000 UTC m=+1017.967851516" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-combined-ca-bundle\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122534 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-logs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-internal-tls-certs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-public-tls-certs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122708 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwdt\" (UniqueName: 
\"kubernetes.io/projected/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-kube-api-access-6pwdt\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122784 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-config-data\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.122976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-config-data-custom\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.224858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwdt\" (UniqueName: \"kubernetes.io/projected/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-kube-api-access-6pwdt\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.224931 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-config-data\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.225014 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-config-data-custom\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.225081 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-combined-ca-bundle\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.225110 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-logs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.225161 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-internal-tls-certs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.225208 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-public-tls-certs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.226439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-logs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.231544 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-config-data-custom\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.231679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-combined-ca-bundle\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.233967 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-public-tls-certs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.242511 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-internal-tls-certs\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.244438 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-config-data\") pod 
\"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.250571 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pwdt\" (UniqueName: \"kubernetes.io/projected/d5b91f63-f0c4-4c4b-a06a-0136898c0beb-kube-api-access-6pwdt\") pod \"barbican-api-5784c7f6c4-pqpwp\" (UID: \"d5b91f63-f0c4-4c4b-a06a-0136898c0beb\") " pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.302700 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.900071 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5784c7f6c4-pqpwp"] Oct 06 12:04:24 crc kubenswrapper[4958]: W1006 12:04:24.902185 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b91f63_f0c4_4c4b_a06a_0136898c0beb.slice/crio-9719786fbf6ea335f07a5a36192296efd96bc281c7fc8f9a3da9cef1d4b56c2a WatchSource:0}: Error finding container 9719786fbf6ea335f07a5a36192296efd96bc281c7fc8f9a3da9cef1d4b56c2a: Status 404 returned error can't find the container with id 9719786fbf6ea335f07a5a36192296efd96bc281c7fc8f9a3da9cef1d4b56c2a Oct 06 12:04:24 crc kubenswrapper[4958]: I1006 12:04:24.971984 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.031569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wqlhj" event={"ID":"9e240eda-9921-45e1-991d-971031189ee4","Type":"ContainerStarted","Data":"9baca3a04d3fb8a69611a589ecd1f022f36e21ee81568370ebeb268cbbeae73a"} Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.042723 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-combined-ca-bundle\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.042798 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-log-httpd\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.042851 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnktv\" (UniqueName: \"kubernetes.io/projected/7198a6cb-9e91-48c8-82b5-16f40fb6b732-kube-api-access-jnktv\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.042940 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-run-httpd\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.042969 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-sg-core-conf-yaml\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.043048 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-config-data\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.043159 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-scripts\") pod \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\" (UID: \"7198a6cb-9e91-48c8-82b5-16f40fb6b732\") " Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.047677 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.048318 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.071604 4958 generic.go:334] "Generic (PLEG): container finished" podID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerID="fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37" exitCode=0 Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.071690 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerDied","Data":"fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37"} Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.071726 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7198a6cb-9e91-48c8-82b5-16f40fb6b732","Type":"ContainerDied","Data":"329a77dfcd7ff2498a851bc6159119753fc1f1b8af3580ee332432affb01290b"} Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.071746 4958 scope.go:117] "RemoveContainer" containerID="7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.071913 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.066389 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wqlhj" podStartSLOduration=2.59147987 podStartE2EDuration="43.066367424s" podCreationTimestamp="2025-10-06 12:03:42 +0000 UTC" firstStartedPulling="2025-10-06 12:03:43.196469051 +0000 UTC m=+977.082494359" lastFinishedPulling="2025-10-06 12:04:23.671356605 +0000 UTC m=+1017.557381913" observedRunningTime="2025-10-06 12:04:25.047018549 +0000 UTC m=+1018.933043867" watchObservedRunningTime="2025-10-06 12:04:25.066367424 +0000 UTC m=+1018.952392732" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.075677 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-scripts" (OuterVolumeSpecName: "scripts") pod "7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.076736 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7198a6cb-9e91-48c8-82b5-16f40fb6b732-kube-api-access-jnktv" (OuterVolumeSpecName: "kube-api-access-jnktv") pod "7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "kube-api-access-jnktv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.080888 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5784c7f6c4-pqpwp" event={"ID":"d5b91f63-f0c4-4c4b-a06a-0136898c0beb","Type":"ContainerStarted","Data":"9719786fbf6ea335f07a5a36192296efd96bc281c7fc8f9a3da9cef1d4b56c2a"} Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.082638 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.082659 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.149369 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.149395 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.149405 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnktv\" (UniqueName: \"kubernetes.io/projected/7198a6cb-9e91-48c8-82b5-16f40fb6b732-kube-api-access-jnktv\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.149414 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7198a6cb-9e91-48c8-82b5-16f40fb6b732-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.163358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.180377 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.241300 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.252304 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.252336 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.252438 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-config-data" (OuterVolumeSpecName: "config-data") pod "7198a6cb-9e91-48c8-82b5-16f40fb6b732" (UID: "7198a6cb-9e91-48c8-82b5-16f40fb6b732"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.326712 4958 scope.go:117] "RemoveContainer" containerID="0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.330273 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.343798 4958 scope.go:117] "RemoveContainer" containerID="fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.353768 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7198a6cb-9e91-48c8-82b5-16f40fb6b732-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.364460 4958 scope.go:117] "RemoveContainer" containerID="d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.382547 4958 scope.go:117] "RemoveContainer" containerID="7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.382986 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65\": container with ID starting with 7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65 not found: ID does not exist" containerID="7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.383026 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65"} err="failed to get container status 
\"7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65\": rpc error: code = NotFound desc = could not find container \"7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65\": container with ID starting with 7d5a588df02d8834707c4f5c9c1e9aeaf83308b6cf3b95bdc5a56a7d4c749e65 not found: ID does not exist" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.383055 4958 scope.go:117] "RemoveContainer" containerID="0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.383362 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252\": container with ID starting with 0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252 not found: ID does not exist" containerID="0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.383408 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252"} err="failed to get container status \"0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252\": rpc error: code = NotFound desc = could not find container \"0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252\": container with ID starting with 0cf16a119eb12aa5d2aaed61e860d1743af7865a55253fe3cfbadcff94f57252 not found: ID does not exist" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.383436 4958 scope.go:117] "RemoveContainer" containerID="fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.383672 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37\": container with ID starting with fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37 not found: ID does not exist" containerID="fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.383706 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37"} err="failed to get container status \"fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37\": rpc error: code = NotFound desc = could not find container \"fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37\": container with ID starting with fa6d616ab64cdaf8a95c4dd191e9c95039f4bfe40449b0027e3e6a161b255e37 not found: ID does not exist" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.383727 4958 scope.go:117] "RemoveContainer" containerID="d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.384013 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85\": container with ID starting with d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85 not found: ID does not exist" containerID="d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.384040 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85"} err="failed to get container status \"d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85\": rpc error: code = NotFound desc = could not find container \"d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85\": container with ID 
starting with d9d94ed4838b3a427237aa8e38a79d82bafd2ef7b89280db8b1bcc9bbd39fe85 not found: ID does not exist" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.402451 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.411333 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.422509 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.422872 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="sg-core" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.422891 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="sg-core" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.422912 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-notification-agent" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.422919 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-notification-agent" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.422927 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="proxy-httpd" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.422934 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="proxy-httpd" Oct 06 12:04:25 crc kubenswrapper[4958]: E1006 12:04:25.422952 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-central-agent" Oct 06 12:04:25 crc kubenswrapper[4958]: 
I1006 12:04:25.422958 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-central-agent" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.423117 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-central-agent" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.423136 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="proxy-httpd" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.423162 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="ceilometer-notification-agent" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.423169 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" containerName="sg-core" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.429319 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.431784 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.431991 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.458633 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.500342 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.540942 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592719 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9xn\" (UniqueName: \"kubernetes.io/projected/d71280d2-c744-42ec-985c-297896ba4731-kube-api-access-np9xn\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592775 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-config-data\") pod \"ceilometer-0\" (UID: 
\"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-run-httpd\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592897 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-log-httpd\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-scripts\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.592973 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.694776 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-log-httpd\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.694844 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-scripts\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.694882 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.694922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9xn\" (UniqueName: \"kubernetes.io/projected/d71280d2-c744-42ec-985c-297896ba4731-kube-api-access-np9xn\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.694954 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.695000 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-config-data\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.695022 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-run-httpd\") pod \"ceilometer-0\" (UID: 
\"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.695346 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-run-httpd\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.695786 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-log-httpd\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.699989 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-config-data\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.700031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.700447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.708772 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-scripts\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.713799 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9xn\" (UniqueName: \"kubernetes.io/projected/d71280d2-c744-42ec-985c-297896ba4731-kube-api-access-np9xn\") pod \"ceilometer-0\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " pod="openstack/ceilometer-0" Oct 06 12:04:25 crc kubenswrapper[4958]: I1006 12:04:25.769396 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:26 crc kubenswrapper[4958]: I1006 12:04:26.095186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5784c7f6c4-pqpwp" event={"ID":"d5b91f63-f0c4-4c4b-a06a-0136898c0beb","Type":"ContainerStarted","Data":"76f96fdc6c77165fda43136b2331de395158cd4937d08d2c32c70739e9ee6ecd"} Oct 06 12:04:26 crc kubenswrapper[4958]: I1006 12:04:26.096370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5784c7f6c4-pqpwp" event={"ID":"d5b91f63-f0c4-4c4b-a06a-0136898c0beb","Type":"ContainerStarted","Data":"4e053969cb2680b064d6d6d28a9562aa2ba95baf5a7f5342decbfcacb4acdbc8"} Oct 06 12:04:26 crc kubenswrapper[4958]: I1006 12:04:26.116737 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5784c7f6c4-pqpwp" podStartSLOduration=3.116722896 podStartE2EDuration="3.116722896s" podCreationTimestamp="2025-10-06 12:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:26.112002044 +0000 UTC m=+1019.998027342" watchObservedRunningTime="2025-10-06 12:04:26.116722896 +0000 UTC m=+1020.002748204" Oct 06 12:04:26 crc kubenswrapper[4958]: I1006 12:04:26.187792 4958 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:26 crc kubenswrapper[4958]: I1006 12:04:26.930557 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7198a6cb-9e91-48c8-82b5-16f40fb6b732" path="/var/lib/kubelet/pods/7198a6cb-9e91-48c8-82b5-16f40fb6b732/volumes" Oct 06 12:04:27 crc kubenswrapper[4958]: I1006 12:04:27.126332 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerStarted","Data":"901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9"} Oct 06 12:04:27 crc kubenswrapper[4958]: I1006 12:04:27.126374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerStarted","Data":"74ed12693cf78af180d7b5eaef4d585eb750f3dea38f22634ac8e92f8fa2b5d9"} Oct 06 12:04:27 crc kubenswrapper[4958]: I1006 12:04:27.126388 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:27 crc kubenswrapper[4958]: I1006 12:04:27.126888 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:28 crc kubenswrapper[4958]: I1006 12:04:28.135238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerStarted","Data":"388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d"} Oct 06 12:04:28 crc kubenswrapper[4958]: I1006 12:04:28.353902 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6d6b6556f7-c2dwg" Oct 06 12:04:29 crc kubenswrapper[4958]: I1006 12:04:29.146638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerStarted","Data":"9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d"} Oct 06 12:04:29 crc kubenswrapper[4958]: I1006 12:04:29.148575 4958 generic.go:334] "Generic (PLEG): container finished" podID="9e240eda-9921-45e1-991d-971031189ee4" containerID="9baca3a04d3fb8a69611a589ecd1f022f36e21ee81568370ebeb268cbbeae73a" exitCode=0 Oct 06 12:04:29 crc kubenswrapper[4958]: I1006 12:04:29.148898 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wqlhj" event={"ID":"9e240eda-9921-45e1-991d-971031189ee4","Type":"ContainerDied","Data":"9baca3a04d3fb8a69611a589ecd1f022f36e21ee81568370ebeb268cbbeae73a"} Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.158962 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerStarted","Data":"b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06"} Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.580413 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.601837 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.11614891 podStartE2EDuration="5.601817969s" podCreationTimestamp="2025-10-06 12:04:25 +0000 UTC" firstStartedPulling="2025-10-06 12:04:26.198625359 +0000 UTC m=+1020.084650667" lastFinishedPulling="2025-10-06 12:04:29.684294408 +0000 UTC m=+1023.570319726" observedRunningTime="2025-10-06 12:04:30.186376167 +0000 UTC m=+1024.072401475" watchObservedRunningTime="2025-10-06 12:04:30.601817969 +0000 UTC m=+1024.487843287" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-config-data\") pod \"9e240eda-9921-45e1-991d-971031189ee4\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687635 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e240eda-9921-45e1-991d-971031189ee4-etc-machine-id\") pod \"9e240eda-9921-45e1-991d-971031189ee4\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-combined-ca-bundle\") pod \"9e240eda-9921-45e1-991d-971031189ee4\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-scripts\") pod 
\"9e240eda-9921-45e1-991d-971031189ee4\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687734 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-db-sync-config-data\") pod \"9e240eda-9921-45e1-991d-971031189ee4\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687723 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e240eda-9921-45e1-991d-971031189ee4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9e240eda-9921-45e1-991d-971031189ee4" (UID: "9e240eda-9921-45e1-991d-971031189ee4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.687753 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjjl7\" (UniqueName: \"kubernetes.io/projected/9e240eda-9921-45e1-991d-971031189ee4-kube-api-access-xjjl7\") pod \"9e240eda-9921-45e1-991d-971031189ee4\" (UID: \"9e240eda-9921-45e1-991d-971031189ee4\") " Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.688586 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e240eda-9921-45e1-991d-971031189ee4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.707302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-scripts" (OuterVolumeSpecName: "scripts") pod "9e240eda-9921-45e1-991d-971031189ee4" (UID: "9e240eda-9921-45e1-991d-971031189ee4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.718037 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9e240eda-9921-45e1-991d-971031189ee4" (UID: "9e240eda-9921-45e1-991d-971031189ee4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.718176 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e240eda-9921-45e1-991d-971031189ee4-kube-api-access-xjjl7" (OuterVolumeSpecName: "kube-api-access-xjjl7") pod "9e240eda-9921-45e1-991d-971031189ee4" (UID: "9e240eda-9921-45e1-991d-971031189ee4"). InnerVolumeSpecName "kube-api-access-xjjl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.733341 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e240eda-9921-45e1-991d-971031189ee4" (UID: "9e240eda-9921-45e1-991d-971031189ee4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.753501 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-config-data" (OuterVolumeSpecName: "config-data") pod "9e240eda-9921-45e1-991d-971031189ee4" (UID: "9e240eda-9921-45e1-991d-971031189ee4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.790095 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.790127 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.790138 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.790162 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e240eda-9921-45e1-991d-971031189ee4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.790170 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjjl7\" (UniqueName: \"kubernetes.io/projected/9e240eda-9921-45e1-991d-971031189ee4-kube-api-access-xjjl7\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:30 crc kubenswrapper[4958]: I1006 12:04:30.923437 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.002385 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-xqx5n"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.002710 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerName="dnsmasq-dns" 
containerID="cri-o://77af60a7fc658d197c6e7c6890bdd030dd1b3747670ee8b18644b9426796ed8f" gracePeriod=10 Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.023799 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 06 12:04:31 crc kubenswrapper[4958]: E1006 12:04:31.024552 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e240eda-9921-45e1-991d-971031189ee4" containerName="cinder-db-sync" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.024570 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e240eda-9921-45e1-991d-971031189ee4" containerName="cinder-db-sync" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.027547 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e240eda-9921-45e1-991d-971031189ee4" containerName="cinder-db-sync" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.028387 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.032016 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.032700 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fsgdb" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.032967 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.048755 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.195476 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2474d50f-478f-4d0f-abc0-f0a5135285ca-combined-ca-bundle\") pod \"openstackclient\" 
(UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.195774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2474d50f-478f-4d0f-abc0-f0a5135285ca-openstack-config\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.195866 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbz9d\" (UniqueName: \"kubernetes.io/projected/2474d50f-478f-4d0f-abc0-f0a5135285ca-kube-api-access-vbz9d\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.195899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2474d50f-478f-4d0f-abc0-f0a5135285ca-openstack-config-secret\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.196137 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerID="77af60a7fc658d197c6e7c6890bdd030dd1b3747670ee8b18644b9426796ed8f" exitCode=0 Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.196273 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" event={"ID":"1f8e795b-4ea8-4088-8ccb-345195b2d313","Type":"ContainerDied","Data":"77af60a7fc658d197c6e7c6890bdd030dd1b3747670ee8b18644b9426796ed8f"} Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.218167 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wqlhj" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.218248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wqlhj" event={"ID":"9e240eda-9921-45e1-991d-971031189ee4","Type":"ContainerDied","Data":"51eaa048d83d3adeab10bb404c85867b8cc7e1a55df9a4b91af37d4bb000b800"} Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.218297 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51eaa048d83d3adeab10bb404c85867b8cc7e1a55df9a4b91af37d4bb000b800" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.218322 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.298460 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbz9d\" (UniqueName: \"kubernetes.io/projected/2474d50f-478f-4d0f-abc0-f0a5135285ca-kube-api-access-vbz9d\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.298529 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2474d50f-478f-4d0f-abc0-f0a5135285ca-openstack-config-secret\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.298573 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2474d50f-478f-4d0f-abc0-f0a5135285ca-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.298624 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2474d50f-478f-4d0f-abc0-f0a5135285ca-openstack-config\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.304039 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2474d50f-478f-4d0f-abc0-f0a5135285ca-openstack-config\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.320096 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2474d50f-478f-4d0f-abc0-f0a5135285ca-openstack-config-secret\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.320547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbz9d\" (UniqueName: \"kubernetes.io/projected/2474d50f-478f-4d0f-abc0-f0a5135285ca-kube-api-access-vbz9d\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.322727 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2474d50f-478f-4d0f-abc0-f0a5135285ca-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2474d50f-478f-4d0f-abc0-f0a5135285ca\") " pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.346699 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.484571 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dkwc8"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.486081 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.499888 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dkwc8"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.513978 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.521244 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.529810 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.529983 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x7gpf" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.530394 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.532679 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.555899 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnwl\" (UniqueName: \"kubernetes.io/projected/d8409911-8a67-4382-87b7-7a050b025a09-kube-api-access-qfnwl\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608172 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608236 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608256 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-svc\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608319 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-config\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608335 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcmt\" (UniqueName: \"kubernetes.io/projected/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-kube-api-access-tfcmt\") pod 
\"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.608358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8409911-8a67-4382-87b7-7a050b025a09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.620940 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.627785 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.640103 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.640922 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710125 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7295b149-cc50-47cd-a942-db144e82e687-logs\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710222 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data-custom\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710282 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7295b149-cc50-47cd-a942-db144e82e687-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710316 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710333 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfnwl\" (UniqueName: \"kubernetes.io/projected/d8409911-8a67-4382-87b7-7a050b025a09-kube-api-access-qfnwl\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710358 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710382 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710404 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710438 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-svc\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data-custom\") pod \"cinder-scheduler-0\" 
(UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-scripts\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-config\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfcmt\" (UniqueName: \"kubernetes.io/projected/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-kube-api-access-tfcmt\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710553 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.710571 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8409911-8a67-4382-87b7-7a050b025a09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc 
kubenswrapper[4958]: I1006 12:04:31.710590 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgx5v\" (UniqueName: \"kubernetes.io/projected/7295b149-cc50-47cd-a942-db144e82e687-kube-api-access-hgx5v\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.711659 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.718505 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-config\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.718613 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8409911-8a67-4382-87b7-7a050b025a09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.719191 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.721990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-svc\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.722532 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.726059 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.727675 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfcmt\" (UniqueName: \"kubernetes.io/projected/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-kube-api-access-tfcmt\") pod \"dnsmasq-dns-5784cf869f-dkwc8\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.728881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.729361 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.732633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.738106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfnwl\" (UniqueName: \"kubernetes.io/projected/d8409911-8a67-4382-87b7-7a050b025a09-kube-api-access-qfnwl\") pod \"cinder-scheduler-0\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.812954 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.812992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgx5v\" (UniqueName: \"kubernetes.io/projected/7295b149-cc50-47cd-a942-db144e82e687-kube-api-access-hgx5v\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813045 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7295b149-cc50-47cd-a942-db144e82e687-logs\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813092 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data-custom\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813211 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7295b149-cc50-47cd-a942-db144e82e687-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813409 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-scripts\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813513 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7295b149-cc50-47cd-a942-db144e82e687-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.813947 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7295b149-cc50-47cd-a942-db144e82e687-logs\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 
12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.823078 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.823220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-scripts\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.825504 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.827425 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.832307 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data-custom\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.836734 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgx5v\" (UniqueName: \"kubernetes.io/projected/7295b149-cc50-47cd-a942-db144e82e687-kube-api-access-hgx5v\") pod \"cinder-api-0\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.848588 4958 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.954682 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:31 crc kubenswrapper[4958]: I1006 12:04:31.985363 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.028858 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-svc\") pod \"1f8e795b-4ea8-4088-8ccb-345195b2d313\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.029157 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-nb\") pod \"1f8e795b-4ea8-4088-8ccb-345195b2d313\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.029190 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-sb\") pod \"1f8e795b-4ea8-4088-8ccb-345195b2d313\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.029211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-config\") pod \"1f8e795b-4ea8-4088-8ccb-345195b2d313\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.029344 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-swift-storage-0\") pod \"1f8e795b-4ea8-4088-8ccb-345195b2d313\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.029387 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxpd\" (UniqueName: \"kubernetes.io/projected/1f8e795b-4ea8-4088-8ccb-345195b2d313-kube-api-access-4fxpd\") pod \"1f8e795b-4ea8-4088-8ccb-345195b2d313\" (UID: \"1f8e795b-4ea8-4088-8ccb-345195b2d313\") " Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.042399 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8e795b-4ea8-4088-8ccb-345195b2d313-kube-api-access-4fxpd" (OuterVolumeSpecName: "kube-api-access-4fxpd") pod "1f8e795b-4ea8-4088-8ccb-345195b2d313" (UID: "1f8e795b-4ea8-4088-8ccb-345195b2d313"). InnerVolumeSpecName "kube-api-access-4fxpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.133337 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fxpd\" (UniqueName: \"kubernetes.io/projected/1f8e795b-4ea8-4088-8ccb-345195b2d313-kube-api-access-4fxpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.179254 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-config" (OuterVolumeSpecName: "config") pod "1f8e795b-4ea8-4088-8ccb-345195b2d313" (UID: "1f8e795b-4ea8-4088-8ccb-345195b2d313"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.187687 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f8e795b-4ea8-4088-8ccb-345195b2d313" (UID: "1f8e795b-4ea8-4088-8ccb-345195b2d313"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.205571 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.241350 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f8e795b-4ea8-4088-8ccb-345195b2d313" (UID: "1f8e795b-4ea8-4088-8ccb-345195b2d313"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.252449 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.252472 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.252482 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.258601 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" event={"ID":"1f8e795b-4ea8-4088-8ccb-345195b2d313","Type":"ContainerDied","Data":"1190f71ec7b9f7f271791a0eb50046621b76a1a813d882e342854589ee3cc580"} Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.258652 4958 scope.go:117] "RemoveContainer" containerID="77af60a7fc658d197c6e7c6890bdd030dd1b3747670ee8b18644b9426796ed8f" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.258799 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-xqx5n" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.265229 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2474d50f-478f-4d0f-abc0-f0a5135285ca","Type":"ContainerStarted","Data":"a0086aaca234b8b46110520f71e6f0a2c94d68d7695f31cde6f4e52b598356b4"} Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.303326 4958 scope.go:117] "RemoveContainer" containerID="4c5d3274cd52120292043ed37eefadc7b739d29b8af799b20b2079314766aeb6" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.303876 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f8e795b-4ea8-4088-8ccb-345195b2d313" (UID: "1f8e795b-4ea8-4088-8ccb-345195b2d313"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.308742 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f8e795b-4ea8-4088-8ccb-345195b2d313" (UID: "1f8e795b-4ea8-4088-8ccb-345195b2d313"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.358300 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.358336 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f8e795b-4ea8-4088-8ccb-345195b2d313-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.563430 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dkwc8"] Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.632194 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-xqx5n"] Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.643001 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-xqx5n"] Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.673220 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.788025 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:32 crc kubenswrapper[4958]: I1006 12:04:32.926288 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" path="/var/lib/kubelet/pods/1f8e795b-4ea8-4088-8ccb-345195b2d313/volumes" Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.244633 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.284626 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7295b149-cc50-47cd-a942-db144e82e687","Type":"ContainerStarted","Data":"8c461b9875261ed506af9dd967f4743c056396f721dc5293af7a1e10e5dd77f2"} Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.286128 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8409911-8a67-4382-87b7-7a050b025a09","Type":"ContainerStarted","Data":"868f1a1f91d715c7af4c8a173310c1418248c63e0b330650eeca75eb67f0588d"} Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.293695 4958 generic.go:334] "Generic (PLEG): container finished" podID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerID="977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3" exitCode=0 Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.293743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" event={"ID":"7ba8a4da-fc79-4b19-ae45-208cbf09bbff","Type":"ContainerDied","Data":"977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3"} Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.293772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" event={"ID":"7ba8a4da-fc79-4b19-ae45-208cbf09bbff","Type":"ContainerStarted","Data":"185d108b408bf63436e7eb728dfa99b56ab8d3a6fbb000b89702740ee302ba47"} Oct 06 12:04:33 crc kubenswrapper[4958]: I1006 12:04:33.727631 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:34 crc kubenswrapper[4958]: I1006 12:04:33.998754 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:34 crc kubenswrapper[4958]: I1006 12:04:34.345013 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" event={"ID":"7ba8a4da-fc79-4b19-ae45-208cbf09bbff","Type":"ContainerStarted","Data":"7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac"} Oct 06 
12:04:34 crc kubenswrapper[4958]: I1006 12:04:34.346346 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:34 crc kubenswrapper[4958]: I1006 12:04:34.349740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7295b149-cc50-47cd-a942-db144e82e687","Type":"ContainerStarted","Data":"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc"} Oct 06 12:04:34 crc kubenswrapper[4958]: I1006 12:04:34.351123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8409911-8a67-4382-87b7-7a050b025a09","Type":"ContainerStarted","Data":"1b07e72b8a8349eeb0637e284e7beeff8033c68b56372590ebd563595f132eca"} Oct 06 12:04:34 crc kubenswrapper[4958]: I1006 12:04:34.374049 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" podStartSLOduration=3.37403222 podStartE2EDuration="3.37403222s" podCreationTimestamp="2025-10-06 12:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:34.373757481 +0000 UTC m=+1028.259782789" watchObservedRunningTime="2025-10-06 12:04:34.37403222 +0000 UTC m=+1028.260057528" Oct 06 12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.362580 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7295b149-cc50-47cd-a942-db144e82e687","Type":"ContainerStarted","Data":"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184"} Oct 06 12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.362995 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api-log" containerID="cri-o://b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc" gracePeriod=30 Oct 06 
12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.363259 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.363499 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api" containerID="cri-o://b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184" gracePeriod=30 Oct 06 12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.375617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8409911-8a67-4382-87b7-7a050b025a09","Type":"ContainerStarted","Data":"4ec8589ab5c358d1ae115d7cb048e88b0ead557915aab85d2287be752cdfaaea"} Oct 06 12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.389904 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.389837749 podStartE2EDuration="4.389837749s" podCreationTimestamp="2025-10-06 12:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:35.379532258 +0000 UTC m=+1029.265557566" watchObservedRunningTime="2025-10-06 12:04:35.389837749 +0000 UTC m=+1029.275863067" Oct 06 12:04:35 crc kubenswrapper[4958]: I1006 12:04:35.419846 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.643412322 podStartE2EDuration="4.419821564s" podCreationTimestamp="2025-10-06 12:04:31 +0000 UTC" firstStartedPulling="2025-10-06 12:04:32.672483707 +0000 UTC m=+1026.558509015" lastFinishedPulling="2025-10-06 12:04:33.448892949 +0000 UTC m=+1027.334918257" observedRunningTime="2025-10-06 12:04:35.395534831 +0000 UTC m=+1029.281560139" watchObservedRunningTime="2025-10-06 12:04:35.419821564 +0000 UTC m=+1029.305846872" Oct 06 12:04:36 crc 
kubenswrapper[4958]: I1006 12:04:36.033555 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5cc4dd9879-7xgdr"] Oct 06 12:04:36 crc kubenswrapper[4958]: E1006 12:04:36.034324 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerName="dnsmasq-dns" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.034340 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerName="dnsmasq-dns" Oct 06 12:04:36 crc kubenswrapper[4958]: E1006 12:04:36.034354 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerName="init" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.034360 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerName="init" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.034524 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8e795b-4ea8-4088-8ccb-345195b2d313" containerName="dnsmasq-dns" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.035443 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.041069 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.049646 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.049831 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.049990 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.069205 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cc4dd9879-7xgdr"] Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137660 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgx5v\" (UniqueName: \"kubernetes.io/projected/7295b149-cc50-47cd-a942-db144e82e687-kube-api-access-hgx5v\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137746 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7295b149-cc50-47cd-a942-db144e82e687-logs\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137896 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7295b149-cc50-47cd-a942-db144e82e687-etc-machine-id\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137922 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-scripts\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137946 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data-custom\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.137983 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-combined-ca-bundle\") pod \"7295b149-cc50-47cd-a942-db144e82e687\" (UID: \"7295b149-cc50-47cd-a942-db144e82e687\") " Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138270 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6362-e91c-47c8-8616-702c4cada19a-run-httpd\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138304 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-internal-tls-certs\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " 
pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138321 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-combined-ca-bundle\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhm6c\" (UniqueName: \"kubernetes.io/projected/eb6c6362-e91c-47c8-8616-702c4cada19a-kube-api-access-rhm6c\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138386 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb6c6362-e91c-47c8-8616-702c4cada19a-etc-swift\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138411 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-public-tls-certs\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138447 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6362-e91c-47c8-8616-702c4cada19a-log-httpd\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: 
\"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.138477 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-config-data\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.140628 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7295b149-cc50-47cd-a942-db144e82e687-logs" (OuterVolumeSpecName: "logs") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.140697 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7295b149-cc50-47cd-a942-db144e82e687-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.148004 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-scripts" (OuterVolumeSpecName: "scripts") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.148111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7295b149-cc50-47cd-a942-db144e82e687-kube-api-access-hgx5v" (OuterVolumeSpecName: "kube-api-access-hgx5v") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "kube-api-access-hgx5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.182037 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.194309 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.213271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data" (OuterVolumeSpecName: "config-data") pod "7295b149-cc50-47cd-a942-db144e82e687" (UID: "7295b149-cc50-47cd-a942-db144e82e687"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-public-tls-certs\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239253 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6362-e91c-47c8-8616-702c4cada19a-log-httpd\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-config-data\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6362-e91c-47c8-8616-702c4cada19a-run-httpd\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-internal-tls-certs\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc 
kubenswrapper[4958]: I1006 12:04:36.239396 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-combined-ca-bundle\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhm6c\" (UniqueName: \"kubernetes.io/projected/eb6c6362-e91c-47c8-8616-702c4cada19a-kube-api-access-rhm6c\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239462 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb6c6362-e91c-47c8-8616-702c4cada19a-etc-swift\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239509 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7295b149-cc50-47cd-a942-db144e82e687-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239518 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239527 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 
12:04:36.239537 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239546 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgx5v\" (UniqueName: \"kubernetes.io/projected/7295b149-cc50-47cd-a942-db144e82e687-kube-api-access-hgx5v\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239556 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7295b149-cc50-47cd-a942-db144e82e687-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.239565 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7295b149-cc50-47cd-a942-db144e82e687-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.240121 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6362-e91c-47c8-8616-702c4cada19a-run-httpd\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.240731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6362-e91c-47c8-8616-702c4cada19a-log-httpd\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.244900 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-config-data\") 
pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.244931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-public-tls-certs\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.244988 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-internal-tls-certs\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.245160 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb6c6362-e91c-47c8-8616-702c4cada19a-etc-swift\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.245859 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6c6362-e91c-47c8-8616-702c4cada19a-combined-ca-bundle\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: \"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.257237 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhm6c\" (UniqueName: \"kubernetes.io/projected/eb6c6362-e91c-47c8-8616-702c4cada19a-kube-api-access-rhm6c\") pod \"swift-proxy-5cc4dd9879-7xgdr\" (UID: 
\"eb6c6362-e91c-47c8-8616-702c4cada19a\") " pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.272849 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.376041 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.396719 4958 generic.go:334] "Generic (PLEG): container finished" podID="7295b149-cc50-47cd-a942-db144e82e687" containerID="b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184" exitCode=0 Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.396763 4958 generic.go:334] "Generic (PLEG): container finished" podID="7295b149-cc50-47cd-a942-db144e82e687" containerID="b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc" exitCode=143 Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.396804 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7295b149-cc50-47cd-a942-db144e82e687","Type":"ContainerDied","Data":"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184"} Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.396844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7295b149-cc50-47cd-a942-db144e82e687","Type":"ContainerDied","Data":"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc"} Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.396859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7295b149-cc50-47cd-a942-db144e82e687","Type":"ContainerDied","Data":"8c461b9875261ed506af9dd967f4743c056396f721dc5293af7a1e10e5dd77f2"} Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.396880 4958 scope.go:117] "RemoveContainer" 
containerID="b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.397058 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.436082 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5784c7f6c4-pqpwp" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.448162 4958 scope.go:117] "RemoveContainer" containerID="b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.461067 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.480200 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.511812 4958 scope.go:117] "RemoveContainer" containerID="b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184" Oct 06 12:04:36 crc kubenswrapper[4958]: E1006 12:04:36.515891 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184\": container with ID starting with b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184 not found: ID does not exist" containerID="b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.515989 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184"} err="failed to get container status \"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184\": rpc error: code = NotFound desc = could not find container 
\"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184\": container with ID starting with b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184 not found: ID does not exist" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.516062 4958 scope.go:117] "RemoveContainer" containerID="b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc" Oct 06 12:04:36 crc kubenswrapper[4958]: E1006 12:04:36.517606 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc\": container with ID starting with b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc not found: ID does not exist" containerID="b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.517746 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc"} err="failed to get container status \"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc\": rpc error: code = NotFound desc = could not find container \"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc\": container with ID starting with b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc not found: ID does not exist" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.517778 4958 scope.go:117] "RemoveContainer" containerID="b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.525115 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184"} err="failed to get container status \"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184\": rpc error: code = NotFound desc = could not find 
container \"b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184\": container with ID starting with b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184 not found: ID does not exist" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.525220 4958 scope.go:117] "RemoveContainer" containerID="b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.528881 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc"} err="failed to get container status \"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc\": rpc error: code = NotFound desc = could not find container \"b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc\": container with ID starting with b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc not found: ID does not exist" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.550931 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:36 crc kubenswrapper[4958]: E1006 12:04:36.555213 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api-log" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.555530 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api-log" Oct 06 12:04:36 crc kubenswrapper[4958]: E1006 12:04:36.555554 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.555561 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.555981 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api-log" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.556118 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7295b149-cc50-47cd-a942-db144e82e687" containerName="cinder-api" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.560082 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.568265 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.568564 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.594572 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.689365 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.693934 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-scripts\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.693994 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-config-data\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.694043 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.704260 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.705942 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.706051 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-logs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.706073 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.706181 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.706201 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt8cd\" (UniqueName: \"kubernetes.io/projected/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-kube-api-access-lt8cd\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.718336 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5958f9647d-2d24s"] Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.718591 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5958f9647d-2d24s" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api-log" containerID="cri-o://85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3" gracePeriod=30 Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.719185 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5958f9647d-2d24s" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api" containerID="cri-o://ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e" gracePeriod=30 Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.809902 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.809942 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt8cd\" 
(UniqueName: \"kubernetes.io/projected/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-kube-api-access-lt8cd\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.809971 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-scripts\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.809994 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-config-data\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.810027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.810074 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.810111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc 
kubenswrapper[4958]: I1006 12:04:36.810167 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-logs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.810183 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.810668 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.811867 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-logs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.817673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-scripts\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.818683 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.819555 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-config-data\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.822851 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.823921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.832747 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.832986 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt8cd\" (UniqueName: \"kubernetes.io/projected/5631f7c8-d7b1-4655-8acd-83a29bb5f3b3-kube-api-access-lt8cd\") pod \"cinder-api-0\" (UID: \"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3\") " pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.849995 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.910277 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:36 crc kubenswrapper[4958]: I1006 12:04:36.959325 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7295b149-cc50-47cd-a942-db144e82e687" path="/var/lib/kubelet/pods/7295b149-cc50-47cd-a942-db144e82e687/volumes" Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.059616 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.059885 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-central-agent" containerID="cri-o://901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9" gracePeriod=30 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.059914 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="sg-core" containerID="cri-o://9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d" gracePeriod=30 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.059996 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-notification-agent" containerID="cri-o://388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d" gracePeriod=30 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.060047 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="proxy-httpd" containerID="cri-o://b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06" gracePeriod=30 
Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.149539 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cc4dd9879-7xgdr"] Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.408373 4958 generic.go:334] "Generic (PLEG): container finished" podID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerID="85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3" exitCode=143 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.408413 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958f9647d-2d24s" event={"ID":"d92c9a92-e930-40fd-83bb-b478da76b0d3","Type":"ContainerDied","Data":"85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3"} Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.418223 4958 generic.go:334] "Generic (PLEG): container finished" podID="d71280d2-c744-42ec-985c-297896ba4731" containerID="b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06" exitCode=0 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.418250 4958 generic.go:334] "Generic (PLEG): container finished" podID="d71280d2-c744-42ec-985c-297896ba4731" containerID="9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d" exitCode=2 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.418258 4958 generic.go:334] "Generic (PLEG): container finished" podID="d71280d2-c744-42ec-985c-297896ba4731" containerID="388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d" exitCode=0 Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.418296 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerDied","Data":"b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06"} Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.418322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerDied","Data":"9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d"} Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.418344 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerDied","Data":"388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d"} Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.422994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" event={"ID":"eb6c6362-e91c-47c8-8616-702c4cada19a","Type":"ContainerStarted","Data":"63c76dc83239b1d2551c6a582ef92a020a630f9610a48d9dafbcc67a5e698d70"} Oct 06 12:04:37 crc kubenswrapper[4958]: I1006 12:04:37.488343 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.092657 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137699 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-run-httpd\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137741 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np9xn\" (UniqueName: \"kubernetes.io/projected/d71280d2-c744-42ec-985c-297896ba4731-kube-api-access-np9xn\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137763 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-log-httpd\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-combined-ca-bundle\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137819 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-config-data\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137942 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-scripts\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.137986 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-sg-core-conf-yaml\") pod \"d71280d2-c744-42ec-985c-297896ba4731\" (UID: \"d71280d2-c744-42ec-985c-297896ba4731\") " Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.139412 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.140067 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.146274 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-scripts" (OuterVolumeSpecName: "scripts") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.147584 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d71280d2-c744-42ec-985c-297896ba4731-kube-api-access-np9xn" (OuterVolumeSpecName: "kube-api-access-np9xn") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "kube-api-access-np9xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.169904 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.240551 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.240929 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.240941 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.240951 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np9xn\" (UniqueName: \"kubernetes.io/projected/d71280d2-c744-42ec-985c-297896ba4731-kube-api-access-np9xn\") on node 
\"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.240961 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71280d2-c744-42ec-985c-297896ba4731-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.244480 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.276418 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-config-data" (OuterVolumeSpecName: "config-data") pod "d71280d2-c744-42ec-985c-297896ba4731" (UID: "d71280d2-c744-42ec-985c-297896ba4731"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.344225 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.344267 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71280d2-c744-42ec-985c-297896ba4731-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.446683 4958 generic.go:334] "Generic (PLEG): container finished" podID="d71280d2-c744-42ec-985c-297896ba4731" containerID="901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9" exitCode=0 Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.446794 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerDied","Data":"901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9"} Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.446850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71280d2-c744-42ec-985c-297896ba4731","Type":"ContainerDied","Data":"74ed12693cf78af180d7b5eaef4d585eb750f3dea38f22634ac8e92f8fa2b5d9"} Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.446872 4958 scope.go:117] "RemoveContainer" containerID="b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.447023 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.451550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" event={"ID":"eb6c6362-e91c-47c8-8616-702c4cada19a","Type":"ContainerStarted","Data":"02aadccbf4a0d2fe9b5f64059d45cfe1330ab7e95e5593c3bc39d2200e34749c"} Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.451613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" event={"ID":"eb6c6362-e91c-47c8-8616-702c4cada19a","Type":"ContainerStarted","Data":"060cfa6b622693b9e1ee41a411a1faed798a9dfa7fd49d4724295837a6ea191f"} Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.451834 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.451855 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.457042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3","Type":"ContainerStarted","Data":"68ad686f747c501b2fb4be4e8e7c33ae63d663b6d1c5c8d3b4083ec3b6f72dbc"} Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.457086 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3","Type":"ContainerStarted","Data":"1cc87860a5eca0a79f38b122b3c77913f0542bd7b29cdad54cef6a8aa8dddad2"} Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.491182 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" podStartSLOduration=3.491163884 podStartE2EDuration="3.491163884s" podCreationTimestamp="2025-10-06 12:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:38.484572475 +0000 UTC m=+1032.370597803" watchObservedRunningTime="2025-10-06 12:04:38.491163884 +0000 UTC m=+1032.377189192" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.524321 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.538209 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.559495 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.560054 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-central-agent" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560115 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-central-agent" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.560224 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-notification-agent" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560276 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-notification-agent" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.560333 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="sg-core" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560381 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="sg-core" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.560440 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="proxy-httpd" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560487 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="proxy-httpd" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560771 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="sg-core" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560848 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-notification-agent" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560907 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="ceilometer-central-agent" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.560968 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71280d2-c744-42ec-985c-297896ba4731" containerName="proxy-httpd" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.563043 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.565118 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.596638 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.596877 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.597031 4958 scope.go:117] "RemoveContainer" containerID="9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.638927 4958 scope.go:117] "RemoveContainer" containerID="388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.654972 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-config-data\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.655014 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.655042 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " 
pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.655228 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-run-httpd\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.655544 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-log-httpd\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.655623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk84n\" (UniqueName: \"kubernetes.io/projected/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-kube-api-access-zk84n\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.655656 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-scripts\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.702636 4958 scope.go:117] "RemoveContainer" containerID="901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.726410 4958 scope.go:117] "RemoveContainer" containerID="b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.727186 4958 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06\": container with ID starting with b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06 not found: ID does not exist" containerID="b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.727225 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06"} err="failed to get container status \"b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06\": rpc error: code = NotFound desc = could not find container \"b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06\": container with ID starting with b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06 not found: ID does not exist" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.727249 4958 scope.go:117] "RemoveContainer" containerID="9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.727489 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d\": container with ID starting with 9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d not found: ID does not exist" containerID="9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.727514 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d"} err="failed to get container status \"9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d\": rpc error: code = NotFound desc = could not find container 
\"9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d\": container with ID starting with 9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d not found: ID does not exist" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.727526 4958 scope.go:117] "RemoveContainer" containerID="388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.727909 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d\": container with ID starting with 388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d not found: ID does not exist" containerID="388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.727932 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d"} err="failed to get container status \"388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d\": rpc error: code = NotFound desc = could not find container \"388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d\": container with ID starting with 388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d not found: ID does not exist" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.727946 4958 scope.go:117] "RemoveContainer" containerID="901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9" Oct 06 12:04:38 crc kubenswrapper[4958]: E1006 12:04:38.728358 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9\": container with ID starting with 901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9 not found: ID does not exist" 
containerID="901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.728380 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9"} err="failed to get container status \"901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9\": rpc error: code = NotFound desc = could not find container \"901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9\": container with ID starting with 901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9 not found: ID does not exist" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757563 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-log-httpd\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk84n\" (UniqueName: \"kubernetes.io/projected/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-kube-api-access-zk84n\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-scripts\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757694 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-config-data\") pod \"ceilometer-0\" 
(UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.757767 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-run-httpd\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.759112 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-log-httpd\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.759298 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-run-httpd\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.762188 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.762660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.767463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-config-data\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.774611 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk84n\" (UniqueName: \"kubernetes.io/projected/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-kube-api-access-zk84n\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.783717 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-scripts\") pod \"ceilometer-0\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.925122 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:38 crc kubenswrapper[4958]: I1006 12:04:38.934034 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d71280d2-c744-42ec-985c-297896ba4731" path="/var/lib/kubelet/pods/d71280d2-c744-42ec-985c-297896ba4731/volumes" Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.979457 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d.scope WatchSource:0}: Error finding container 388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d: Status 404 returned error can't find the container with id 388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.981289 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d.scope WatchSource:0}: Error finding container 9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d: Status 404 returned error can't find the container with id 9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.981496 4958 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-8c461b9875261ed506af9dd967f4743c056396f721dc5293af7a1e10e5dd77f2": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-8c461b9875261ed506af9dd967f4743c056396f721dc5293af7a1e10e5dd77f2: no such file or directory Oct 06 12:04:38 crc 
kubenswrapper[4958]: W1006 12:04:38.981538 4958 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba8a4da_fc79_4b19_ae45_208cbf09bbff.slice/crio-conmon-977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba8a4da_fc79_4b19_ae45_208cbf09bbff.slice/crio-conmon-977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3.scope: no such file or directory Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.981559 4958 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba8a4da_fc79_4b19_ae45_208cbf09bbff.slice/crio-977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba8a4da_fc79_4b19_ae45_208cbf09bbff.slice/crio-977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3.scope: no such file or directory Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.981575 4958 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-conmon-b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-conmon-b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc.scope: no such file or directory Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.981589 4958 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-b51c94acd10e64ce985e593c96170eade6ae33768207ab649c4d0fa4a456dacc.scope: no such file or directory Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.982055 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06.scope WatchSource:0}: Error finding container b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06: Status 404 returned error can't find the container with id b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06 Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.982096 4958 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-conmon-b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-conmon-b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184.scope: no such file or directory Oct 06 12:04:38 crc kubenswrapper[4958]: W1006 12:04:38.982123 4958 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice/crio-b546d9f9ce71ba944cebfecb6433bc389dad63fc338cb85010650e4532e08184.scope: no such file or directory Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.232135 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-df9b9d74d-rffxt" Oct 06 12:04:39 crc kubenswrapper[4958]: E1006 12:04:39.358708 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-conmon-901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-901b3e1cfc9e9e876ccfbde2853dd86f587d479357d75708ae3292afe1ccf8b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-74ed12693cf78af180d7b5eaef4d585eb750f3dea38f22634ac8e92f8fa2b5d9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-conmon-9df602a13e753fb5090bd19cdade1e12bcb4b050d5ae1211043a0f014fec204d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-conmon-388e92821e45a23bf00ec78d00fa86f85f928ef58b6d39e43d924b0a68e8d15d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0aa3dc0_4553_4ec3_bec4_097c68139910.slice/crio-1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0aa3dc0_4553_4ec3_bec4_097c68139910.slice/crio-conmon-1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71280d2_c744_42ec_985c_297896ba4731.slice/crio-conmon-b9d26e78afd064ffd17e6cacce186b775eee7e53bca68e1f85cef5f83f77ff06.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92c9a92_e930_40fd_83bb_b478da76b0d3.slice/crio-85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7295b149_cc50_47cd_a942_db144e82e687.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92c9a92_e930_40fd_83bb_b478da76b0d3.slice/crio-conmon-85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.504460 4958 generic.go:334] "Generic (PLEG): container finished" podID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerID="1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808" exitCode=137 Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.504992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7567d7f44b-s7rvg" event={"ID":"f0aa3dc0-4553-4ec3-bec4-097c68139910","Type":"ContainerDied","Data":"1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808"} Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.517559 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.552338 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.552316233 podStartE2EDuration="3.552316233s" podCreationTimestamp="2025-10-06 12:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:39.542473916 +0000 UTC m=+1033.428499224" watchObservedRunningTime="2025-10-06 12:04:39.552316233 +0000 UTC m=+1033.438341541" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.649171 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-config-data\") pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728124 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-secret-key\") pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728203 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-tls-certs\") pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728291 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-combined-ca-bundle\") 
pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728323 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0aa3dc0-4553-4ec3-bec4-097c68139910-logs\") pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728410 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-scripts\") pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.728447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdq4c\" (UniqueName: \"kubernetes.io/projected/f0aa3dc0-4553-4ec3-bec4-097c68139910-kube-api-access-gdq4c\") pod \"f0aa3dc0-4553-4ec3-bec4-097c68139910\" (UID: \"f0aa3dc0-4553-4ec3-bec4-097c68139910\") " Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.732655 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0aa3dc0-4553-4ec3-bec4-097c68139910-logs" (OuterVolumeSpecName: "logs") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.741993 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0aa3dc0-4553-4ec3-bec4-097c68139910-kube-api-access-gdq4c" (OuterVolumeSpecName: "kube-api-access-gdq4c") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "kube-api-access-gdq4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.764492 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.770401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.787074 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-scripts" (OuterVolumeSpecName: "scripts") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.803891 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.807447 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.823217 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-config-data" (OuterVolumeSpecName: "config-data") pod "f0aa3dc0-4553-4ec3-bec4-097c68139910" (UID: "f0aa3dc0-4553-4ec3-bec4-097c68139910"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831435 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831483 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdq4c\" (UniqueName: \"kubernetes.io/projected/f0aa3dc0-4553-4ec3-bec4-097c68139910-kube-api-access-gdq4c\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831498 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0aa3dc0-4553-4ec3-bec4-097c68139910-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831510 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831520 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831533 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0aa3dc0-4553-4ec3-bec4-097c68139910-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:39 crc kubenswrapper[4958]: I1006 12:04:39.831543 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0aa3dc0-4553-4ec3-bec4-097c68139910-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.283647 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.338555 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data-custom\") pod \"d92c9a92-e930-40fd-83bb-b478da76b0d3\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.338598 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data\") pod \"d92c9a92-e930-40fd-83bb-b478da76b0d3\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.338636 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6gpb\" (UniqueName: \"kubernetes.io/projected/d92c9a92-e930-40fd-83bb-b478da76b0d3-kube-api-access-t6gpb\") pod \"d92c9a92-e930-40fd-83bb-b478da76b0d3\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.338710 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c9a92-e930-40fd-83bb-b478da76b0d3-logs\") pod \"d92c9a92-e930-40fd-83bb-b478da76b0d3\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " Oct 06 12:04:40 crc 
kubenswrapper[4958]: I1006 12:04:40.338751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-combined-ca-bundle\") pod \"d92c9a92-e930-40fd-83bb-b478da76b0d3\" (UID: \"d92c9a92-e930-40fd-83bb-b478da76b0d3\") " Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.347599 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d92c9a92-e930-40fd-83bb-b478da76b0d3-logs" (OuterVolumeSpecName: "logs") pod "d92c9a92-e930-40fd-83bb-b478da76b0d3" (UID: "d92c9a92-e930-40fd-83bb-b478da76b0d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.354118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d92c9a92-e930-40fd-83bb-b478da76b0d3" (UID: "d92c9a92-e930-40fd-83bb-b478da76b0d3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.355398 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92c9a92-e930-40fd-83bb-b478da76b0d3-kube-api-access-t6gpb" (OuterVolumeSpecName: "kube-api-access-t6gpb") pod "d92c9a92-e930-40fd-83bb-b478da76b0d3" (UID: "d92c9a92-e930-40fd-83bb-b478da76b0d3"). InnerVolumeSpecName "kube-api-access-t6gpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.393758 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92c9a92-e930-40fd-83bb-b478da76b0d3" (UID: "d92c9a92-e930-40fd-83bb-b478da76b0d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.408245 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data" (OuterVolumeSpecName: "config-data") pod "d92c9a92-e930-40fd-83bb-b478da76b0d3" (UID: "d92c9a92-e930-40fd-83bb-b478da76b0d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.441109 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.441180 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.441193 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c9a92-e930-40fd-83bb-b478da76b0d3-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.441205 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6gpb\" (UniqueName: \"kubernetes.io/projected/d92c9a92-e930-40fd-83bb-b478da76b0d3-kube-api-access-t6gpb\") on node 
\"crc\" DevicePath \"\"" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.441223 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c9a92-e930-40fd-83bb-b478da76b0d3-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.564889 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerStarted","Data":"acba22244bac507d673aca3f9064b23210c12f0ea506d28e7869322c344c52ea"} Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.577917 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5631f7c8-d7b1-4655-8acd-83a29bb5f3b3","Type":"ContainerStarted","Data":"5eafa0e26e5e936355bf2748002e0f80ff9d5d5aed89b63fe25986222f582e4e"} Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.579796 4958 generic.go:334] "Generic (PLEG): container finished" podID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerID="ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e" exitCode=0 Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.579839 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958f9647d-2d24s" event={"ID":"d92c9a92-e930-40fd-83bb-b478da76b0d3","Type":"ContainerDied","Data":"ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e"} Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.579856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958f9647d-2d24s" event={"ID":"d92c9a92-e930-40fd-83bb-b478da76b0d3","Type":"ContainerDied","Data":"fb260795f9dc3c7b1c061cb5a0814454893a31047818d3a092ccc5375e38dc02"} Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.579872 4958 scope.go:117] "RemoveContainer" containerID="ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.580012 4958 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5958f9647d-2d24s" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.607749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7567d7f44b-s7rvg" event={"ID":"f0aa3dc0-4553-4ec3-bec4-097c68139910","Type":"ContainerDied","Data":"90d64ddf7e28f4fd3dba605b0e62c148245f6b667816016e63506f623d0db803"} Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.607814 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7567d7f44b-s7rvg" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.624411 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5958f9647d-2d24s"] Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.631241 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5958f9647d-2d24s"] Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.655823 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7567d7f44b-s7rvg"] Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.664101 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7567d7f44b-s7rvg"] Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.667887 4958 scope.go:117] "RemoveContainer" containerID="85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.702954 4958 scope.go:117] "RemoveContainer" containerID="ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e" Oct 06 12:04:40 crc kubenswrapper[4958]: E1006 12:04:40.703565 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e\": container with ID starting with ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e not found: ID does 
not exist" containerID="ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.703598 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e"} err="failed to get container status \"ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e\": rpc error: code = NotFound desc = could not find container \"ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e\": container with ID starting with ab1c7a492e8ca9592bbf4081708aa94c720231df90749d65a07e696d7239c03e not found: ID does not exist" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.703636 4958 scope.go:117] "RemoveContainer" containerID="85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3" Oct 06 12:04:40 crc kubenswrapper[4958]: E1006 12:04:40.704934 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3\": container with ID starting with 85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3 not found: ID does not exist" containerID="85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.704976 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3"} err="failed to get container status \"85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3\": rpc error: code = NotFound desc = could not find container \"85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3\": container with ID starting with 85232274a3cb12408e56c6f35bb82db84acd0a9b628e8b6f97165aecbd6f27a3 not found: ID does not exist" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.705001 4958 
scope.go:117] "RemoveContainer" containerID="2bf82343c7eb8ce707462fc4316e29f13036b7e1df47ae7e9cf97e8645c62246" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.870119 4958 scope.go:117] "RemoveContainer" containerID="1a53f4c2eeaf4a11211eb329d82ea7b02340d3241f4f3fb4358cb52e27b45808" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.956966 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" path="/var/lib/kubelet/pods/d92c9a92-e930-40fd-83bb-b478da76b0d3/volumes" Oct 06 12:04:40 crc kubenswrapper[4958]: I1006 12:04:40.958493 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" path="/var/lib/kubelet/pods/f0aa3dc0-4553-4ec3-bec4-097c68139910/volumes" Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.228756 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.395974 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bc84f8f6c-tdr2k" Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.463097 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df9b9d74d-rffxt"] Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.473953 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-df9b9d74d-rffxt" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-httpd" containerID="cri-o://d4f06781df28f582f5978d251210caeb10e53375d803de3880309ed0e406df19" gracePeriod=30 Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.473134 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-df9b9d74d-rffxt" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-api" containerID="cri-o://85829cdeb0396bf6abe3552a08347214ec79f661f6a666f2091f84050b6b906b" gracePeriod=30 Oct 06 
12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.631524 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerStarted","Data":"c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579"} Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.631756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerStarted","Data":"193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7"} Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.825276 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.896887 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-l6bx4"] Oct 06 12:04:41 crc kubenswrapper[4958]: I1006 12:04:41.897266 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="dnsmasq-dns" containerID="cri-o://3d6e23357b40b220c2cce11946235ee83bbcb5a25c4a585c3a497c082925b56b" gracePeriod=10 Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.140510 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.197782 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.641328 4958 generic.go:334] "Generic (PLEG): container finished" podID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerID="3d6e23357b40b220c2cce11946235ee83bbcb5a25c4a585c3a497c082925b56b" exitCode=0 Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.641410 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" event={"ID":"f71790e2-93a0-4e5a-9099-e2e5a103af3d","Type":"ContainerDied","Data":"3d6e23357b40b220c2cce11946235ee83bbcb5a25c4a585c3a497c082925b56b"} Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.644137 4958 generic.go:334] "Generic (PLEG): container finished" podID="db7e63d2-d43d-417e-939a-be456eaae637" containerID="d4f06781df28f582f5978d251210caeb10e53375d803de3880309ed0e406df19" exitCode=0 Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.644170 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df9b9d74d-rffxt" event={"ID":"db7e63d2-d43d-417e-939a-be456eaae637","Type":"ContainerDied","Data":"d4f06781df28f582f5978d251210caeb10e53375d803de3880309ed0e406df19"} Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.644437 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="cinder-scheduler" containerID="cri-o://1b07e72b8a8349eeb0637e284e7beeff8033c68b56372590ebd563595f132eca" gracePeriod=30 Oct 06 12:04:42 crc kubenswrapper[4958]: I1006 12:04:42.644495 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="probe" containerID="cri-o://4ec8589ab5c358d1ae115d7cb048e88b0ead557915aab85d2287be752cdfaaea" gracePeriod=30 Oct 06 12:04:43 crc kubenswrapper[4958]: I1006 12:04:43.660988 4958 generic.go:334] "Generic (PLEG): container finished" podID="d8409911-8a67-4382-87b7-7a050b025a09" containerID="4ec8589ab5c358d1ae115d7cb048e88b0ead557915aab85d2287be752cdfaaea" exitCode=0 Oct 06 12:04:43 crc kubenswrapper[4958]: I1006 12:04:43.661030 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"d8409911-8a67-4382-87b7-7a050b025a09","Type":"ContainerDied","Data":"4ec8589ab5c358d1ae115d7cb048e88b0ead557915aab85d2287be752cdfaaea"} Oct 06 12:04:44 crc kubenswrapper[4958]: I1006 12:04:44.507474 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:44 crc kubenswrapper[4958]: I1006 12:04:44.507916 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-log" containerID="cri-o://eb466551a2ed74deb5b0345d460e4a060341bf53984c36e1ad2159e49007280e" gracePeriod=30 Oct 06 12:04:44 crc kubenswrapper[4958]: I1006 12:04:44.509680 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-httpd" containerID="cri-o://5e604db492197be7746aaa503276106e68a4d9e81736affd54302a31a30ba540" gracePeriod=30 Oct 06 12:04:44 crc kubenswrapper[4958]: I1006 12:04:44.672303 4958 generic.go:334] "Generic (PLEG): container finished" podID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerID="eb466551a2ed74deb5b0345d460e4a060341bf53984c36e1ad2159e49007280e" exitCode=143 Oct 06 12:04:44 crc kubenswrapper[4958]: I1006 12:04:44.672347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bdff7949-e76a-484d-983d-c3f8fee7f175","Type":"ContainerDied","Data":"eb466551a2ed74deb5b0345d460e4a060341bf53984c36e1ad2159e49007280e"} Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.521186 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-nsqbc"] Oct 06 12:04:45 crc kubenswrapper[4958]: E1006 12:04:45.521643 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api-log" Oct 06 12:04:45 crc 
kubenswrapper[4958]: I1006 12:04:45.521665 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api-log" Oct 06 12:04:45 crc kubenswrapper[4958]: E1006 12:04:45.521707 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.521719 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api" Oct 06 12:04:45 crc kubenswrapper[4958]: E1006 12:04:45.521737 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon-log" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.521745 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon-log" Oct 06 12:04:45 crc kubenswrapper[4958]: E1006 12:04:45.521775 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.521784 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.522025 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon-log" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.522054 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api-log" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.522071 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c9a92-e930-40fd-83bb-b478da76b0d3" containerName="barbican-api" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.522087 
4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0aa3dc0-4553-4ec3-bec4-097c68139910" containerName="horizon" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.522935 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.543298 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nsqbc"] Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.552950 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b2hx\" (UniqueName: \"kubernetes.io/projected/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3-kube-api-access-4b2hx\") pod \"nova-api-db-create-nsqbc\" (UID: \"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3\") " pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.611664 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-gzxhx"] Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.613282 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.622911 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-gzxhx"] Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.654586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b2hx\" (UniqueName: \"kubernetes.io/projected/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3-kube-api-access-4b2hx\") pod \"nova-api-db-create-nsqbc\" (UID: \"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3\") " pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.690893 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b2hx\" (UniqueName: \"kubernetes.io/projected/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3-kube-api-access-4b2hx\") pod \"nova-api-db-create-nsqbc\" (UID: \"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3\") " pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.713993 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6xbk9"] Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.715038 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.716614 4958 generic.go:334] "Generic (PLEG): container finished" podID="d8409911-8a67-4382-87b7-7a050b025a09" containerID="1b07e72b8a8349eeb0637e284e7beeff8033c68b56372590ebd563595f132eca" exitCode=0 Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.716665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8409911-8a67-4382-87b7-7a050b025a09","Type":"ContainerDied","Data":"1b07e72b8a8349eeb0637e284e7beeff8033c68b56372590ebd563595f132eca"} Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.735213 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6xbk9"] Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.756653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqgl\" (UniqueName: \"kubernetes.io/projected/f8492838-08b1-4d13-8697-bf595621d465-kube-api-access-mnqgl\") pod \"nova-cell0-db-create-gzxhx\" (UID: \"f8492838-08b1-4d13-8697-bf595621d465\") " pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.836690 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.836975 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-log" containerID="cri-o://3417a50edc73a8f5851ebbd98453cc79184544b45feb20c6d00c9b8f44f808fd" gracePeriod=30 Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.837268 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-httpd" 
containerID="cri-o://b6bdf703a557f076696027a3d0398d0fe978145e58903f39a8694f11dc29ff8c" gracePeriod=30 Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.858049 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqgl\" (UniqueName: \"kubernetes.io/projected/f8492838-08b1-4d13-8697-bf595621d465-kube-api-access-mnqgl\") pod \"nova-cell0-db-create-gzxhx\" (UID: \"f8492838-08b1-4d13-8697-bf595621d465\") " pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.858265 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srqg8\" (UniqueName: \"kubernetes.io/projected/ee717167-2b5b-4c0a-9528-b14f49856f5e-kube-api-access-srqg8\") pod \"nova-cell1-db-create-6xbk9\" (UID: \"ee717167-2b5b-4c0a-9528-b14f49856f5e\") " pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.859378 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.885164 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqgl\" (UniqueName: \"kubernetes.io/projected/f8492838-08b1-4d13-8697-bf595621d465-kube-api-access-mnqgl\") pod \"nova-cell0-db-create-gzxhx\" (UID: \"f8492838-08b1-4d13-8697-bf595621d465\") " pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.922554 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.934274 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.960174 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srqg8\" (UniqueName: \"kubernetes.io/projected/ee717167-2b5b-4c0a-9528-b14f49856f5e-kube-api-access-srqg8\") pod \"nova-cell1-db-create-6xbk9\" (UID: \"ee717167-2b5b-4c0a-9528-b14f49856f5e\") " pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:45 crc kubenswrapper[4958]: I1006 12:04:45.995841 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srqg8\" (UniqueName: \"kubernetes.io/projected/ee717167-2b5b-4c0a-9528-b14f49856f5e-kube-api-access-srqg8\") pod \"nova-cell1-db-create-6xbk9\" (UID: \"ee717167-2b5b-4c0a-9528-b14f49856f5e\") " pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:46 crc kubenswrapper[4958]: I1006 12:04:46.053463 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:46 crc kubenswrapper[4958]: I1006 12:04:46.381538 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:46 crc kubenswrapper[4958]: I1006 12:04:46.382052 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" Oct 06 12:04:46 crc kubenswrapper[4958]: I1006 12:04:46.734188 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerID="3417a50edc73a8f5851ebbd98453cc79184544b45feb20c6d00c9b8f44f808fd" exitCode=143 Oct 06 12:04:46 crc kubenswrapper[4958]: I1006 12:04:46.734303 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b1cc01c-8b93-41aa-bf54-7d98363efbca","Type":"ContainerDied","Data":"3417a50edc73a8f5851ebbd98453cc79184544b45feb20c6d00c9b8f44f808fd"} Oct 06 12:04:47 crc 
kubenswrapper[4958]: I1006 12:04:47.448236 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:47 crc kubenswrapper[4958]: I1006 12:04:47.499980 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-999fc56db-gbkpz" Oct 06 12:04:47 crc kubenswrapper[4958]: I1006 12:04:47.762719 4958 generic.go:334] "Generic (PLEG): container finished" podID="db7e63d2-d43d-417e-939a-be456eaae637" containerID="85829cdeb0396bf6abe3552a08347214ec79f661f6a666f2091f84050b6b906b" exitCode=0 Oct 06 12:04:47 crc kubenswrapper[4958]: I1006 12:04:47.762980 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df9b9d74d-rffxt" event={"ID":"db7e63d2-d43d-417e-939a-be456eaae637","Type":"ContainerDied","Data":"85829cdeb0396bf6abe3552a08347214ec79f661f6a666f2091f84050b6b906b"} Oct 06 12:04:48 crc kubenswrapper[4958]: I1006 12:04:48.713322 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 06 12:04:48 crc kubenswrapper[4958]: I1006 12:04:48.774088 4958 generic.go:334] "Generic (PLEG): container finished" podID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerID="5e604db492197be7746aaa503276106e68a4d9e81736affd54302a31a30ba540" exitCode=0 Oct 06 12:04:48 crc kubenswrapper[4958]: I1006 12:04:48.774130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bdff7949-e76a-484d-983d-c3f8fee7f175","Type":"ContainerDied","Data":"5e604db492197be7746aaa503276106e68a4d9e81736affd54302a31a30ba540"} Oct 06 12:04:49 crc kubenswrapper[4958]: I1006 12:04:49.802362 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerID="b6bdf703a557f076696027a3d0398d0fe978145e58903f39a8694f11dc29ff8c" exitCode=0 Oct 06 12:04:49 crc kubenswrapper[4958]: I1006 12:04:49.802572 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b1cc01c-8b93-41aa-bf54-7d98363efbca","Type":"ContainerDied","Data":"b6bdf703a557f076696027a3d0398d0fe978145e58903f39a8694f11dc29ff8c"} Oct 06 12:04:49 crc kubenswrapper[4958]: I1006 12:04:49.913129 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.041439 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-764tf\" (UniqueName: \"kubernetes.io/projected/f71790e2-93a0-4e5a-9099-e2e5a103af3d-kube-api-access-764tf\") pod \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.041924 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-config\") pod \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.042059 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-svc\") pod \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.042160 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-swift-storage-0\") pod \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.042209 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-sb\") pod \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.042250 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-nb\") pod \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\" (UID: \"f71790e2-93a0-4e5a-9099-e2e5a103af3d\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.052822 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71790e2-93a0-4e5a-9099-e2e5a103af3d-kube-api-access-764tf" (OuterVolumeSpecName: "kube-api-access-764tf") pod "f71790e2-93a0-4e5a-9099-e2e5a103af3d" (UID: "f71790e2-93a0-4e5a-9099-e2e5a103af3d"). InnerVolumeSpecName "kube-api-access-764tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.146719 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-764tf\" (UniqueName: \"kubernetes.io/projected/f71790e2-93a0-4e5a-9099-e2e5a103af3d-kube-api-access-764tf\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.216962 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.351726 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-scripts\") pod \"d8409911-8a67-4382-87b7-7a050b025a09\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.351762 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-combined-ca-bundle\") pod \"d8409911-8a67-4382-87b7-7a050b025a09\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.351835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8409911-8a67-4382-87b7-7a050b025a09-etc-machine-id\") pod \"d8409911-8a67-4382-87b7-7a050b025a09\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.351901 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data\") pod \"d8409911-8a67-4382-87b7-7a050b025a09\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.352037 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfnwl\" (UniqueName: \"kubernetes.io/projected/d8409911-8a67-4382-87b7-7a050b025a09-kube-api-access-qfnwl\") pod \"d8409911-8a67-4382-87b7-7a050b025a09\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.352096 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data-custom\") pod \"d8409911-8a67-4382-87b7-7a050b025a09\" (UID: \"d8409911-8a67-4382-87b7-7a050b025a09\") " Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.352441 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8409911-8a67-4382-87b7-7a050b025a09-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d8409911-8a67-4382-87b7-7a050b025a09" (UID: "d8409911-8a67-4382-87b7-7a050b025a09"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.360503 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8409911-8a67-4382-87b7-7a050b025a09-kube-api-access-qfnwl" (OuterVolumeSpecName: "kube-api-access-qfnwl") pod "d8409911-8a67-4382-87b7-7a050b025a09" (UID: "d8409911-8a67-4382-87b7-7a050b025a09"). InnerVolumeSpecName "kube-api-access-qfnwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.364182 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d8409911-8a67-4382-87b7-7a050b025a09" (UID: "d8409911-8a67-4382-87b7-7a050b025a09"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.368110 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-scripts" (OuterVolumeSpecName: "scripts") pod "d8409911-8a67-4382-87b7-7a050b025a09" (UID: "d8409911-8a67-4382-87b7-7a050b025a09"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.368986 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.370931 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f71790e2-93a0-4e5a-9099-e2e5a103af3d" (UID: "f71790e2-93a0-4e5a-9099-e2e5a103af3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.376591 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f71790e2-93a0-4e5a-9099-e2e5a103af3d" (UID: "f71790e2-93a0-4e5a-9099-e2e5a103af3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.384184 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-config" (OuterVolumeSpecName: "config") pod "f71790e2-93a0-4e5a-9099-e2e5a103af3d" (UID: "f71790e2-93a0-4e5a-9099-e2e5a103af3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.391641 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f71790e2-93a0-4e5a-9099-e2e5a103af3d" (UID: "f71790e2-93a0-4e5a-9099-e2e5a103af3d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.394483 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df9b9d74d-rffxt"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.405945 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f71790e2-93a0-4e5a-9099-e2e5a103af3d" (UID: "f71790e2-93a0-4e5a-9099-e2e5a103af3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453639 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453672 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453685 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453694 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453705 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8409911-8a67-4382-87b7-7a050b025a09-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453712 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453724 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453732 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71790e2-93a0-4e5a-9099-e2e5a103af3d-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.453739 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfnwl\" (UniqueName: \"kubernetes.io/projected/d8409911-8a67-4382-87b7-7a050b025a09-kube-api-access-qfnwl\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.470801 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8409911-8a67-4382-87b7-7a050b025a09" (UID: "d8409911-8a67-4382-87b7-7a050b025a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.549846 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data" (OuterVolumeSpecName: "config-data") pod "d8409911-8a67-4382-87b7-7a050b025a09" (UID: "d8409911-8a67-4382-87b7-7a050b025a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554655 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-httpd-config\") pod \"db7e63d2-d43d-417e-939a-be456eaae637\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554722 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9x5m\" (UniqueName: \"kubernetes.io/projected/bdff7949-e76a-484d-983d-c3f8fee7f175-kube-api-access-j9x5m\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554762 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-config-data\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-public-tls-certs\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554813 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-combined-ca-bundle\") pod \"db7e63d2-d43d-417e-939a-be456eaae637\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554841 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-ovndb-tls-certs\") pod \"db7e63d2-d43d-417e-939a-be456eaae637\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554898 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-scripts\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554932 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-logs\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554967 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g44mr\" (UniqueName: \"kubernetes.io/projected/db7e63d2-d43d-417e-939a-be456eaae637-kube-api-access-g44mr\") pod \"db7e63d2-d43d-417e-939a-be456eaae637\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.554992 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.555072 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-config\") pod \"db7e63d2-d43d-417e-939a-be456eaae637\" (UID: \"db7e63d2-d43d-417e-939a-be456eaae637\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.555102 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-httpd-run\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.555118 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-combined-ca-bundle\") pod \"bdff7949-e76a-484d-983d-c3f8fee7f175\" (UID: \"bdff7949-e76a-484d-983d-c3f8fee7f175\") "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.555457 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.555473 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8409911-8a67-4382-87b7-7a050b025a09-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.557304 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.557587 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-logs" (OuterVolumeSpecName: "logs") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.559552 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.561801 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-scripts" (OuterVolumeSpecName: "scripts") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.565119 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdff7949-e76a-484d-983d-c3f8fee7f175-kube-api-access-j9x5m" (OuterVolumeSpecName: "kube-api-access-j9x5m") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "kube-api-access-j9x5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.567785 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7e63d2-d43d-417e-939a-be456eaae637-kube-api-access-g44mr" (OuterVolumeSpecName: "kube-api-access-g44mr") pod "db7e63d2-d43d-417e-939a-be456eaae637" (UID: "db7e63d2-d43d-417e-939a-be456eaae637"). InnerVolumeSpecName "kube-api-access-g44mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.569435 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db7e63d2-d43d-417e-939a-be456eaae637" (UID: "db7e63d2-d43d-417e-939a-be456eaae637"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.635278 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.646762 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-config-data" (OuterVolumeSpecName: "config-data") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.654243 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bdff7949-e76a-484d-983d-c3f8fee7f175" (UID: "bdff7949-e76a-484d-983d-c3f8fee7f175"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.657696 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-httpd-run\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.663670 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.663900 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-httpd-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664009 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9x5m\" (UniqueName: \"kubernetes.io/projected/bdff7949-e76a-484d-983d-c3f8fee7f175-kube-api-access-j9x5m\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664087 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664168 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664245 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff7949-e76a-484d-983d-c3f8fee7f175-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664318 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdff7949-e76a-484d-983d-c3f8fee7f175-logs\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664377 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g44mr\" (UniqueName: \"kubernetes.io/projected/db7e63d2-d43d-417e-939a-be456eaae637-kube-api-access-g44mr\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.664496 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.660757 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db7e63d2-d43d-417e-939a-be456eaae637" (UID: "db7e63d2-d43d-417e-939a-be456eaae637"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.661467 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-config" (OuterVolumeSpecName: "config") pod "db7e63d2-d43d-417e-939a-be456eaae637" (UID: "db7e63d2-d43d-417e-939a-be456eaae637"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.677924 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6xbk9"]
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.693090 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.713776 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db7e63d2-d43d-417e-939a-be456eaae637" (UID: "db7e63d2-d43d-417e-939a-be456eaae637"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.717470 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nsqbc"]
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.767824 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.767859 4958 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.767871 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.767880 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7e63d2-d43d-417e-939a-be456eaae637-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.795055 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-gzxhx"]
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.848657 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6xbk9" event={"ID":"ee717167-2b5b-4c0a-9528-b14f49856f5e","Type":"ContainerStarted","Data":"d6cf87a04d586117881e923cc0c9a26d953468166b93f4b0b6101d9a97f5ce18"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.856041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8409911-8a67-4382-87b7-7a050b025a09","Type":"ContainerDied","Data":"868f1a1f91d715c7af4c8a173310c1418248c63e0b330650eeca75eb67f0588d"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.856092 4958 scope.go:117] "RemoveContainer" containerID="4ec8589ab5c358d1ae115d7cb048e88b0ead557915aab85d2287be752cdfaaea"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.856229 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.862060 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nsqbc" event={"ID":"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3","Type":"ContainerStarted","Data":"fb1640726b6ba2d1765ebcfb6797915a49de0f83f5568dc613d6a61bd288868c"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.877002 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2474d50f-478f-4d0f-abc0-f0a5135285ca","Type":"ContainerStarted","Data":"1ec07dead73305167ed1fc4c472a2291b2db9e5401196e252059aa47e09e68de"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.882581 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df9b9d74d-rffxt" event={"ID":"db7e63d2-d43d-417e-939a-be456eaae637","Type":"ContainerDied","Data":"a509850dd117c718747e1eb2989becc666f554c53a50236926ded0b8d3e0d4d3"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.882600 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df9b9d74d-rffxt"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.902761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerStarted","Data":"47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.907641 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.300832856 podStartE2EDuration="20.907618078s" podCreationTimestamp="2025-10-06 12:04:30 +0000 UTC" firstStartedPulling="2025-10-06 12:04:32.250860017 +0000 UTC m=+1026.136885325" lastFinishedPulling="2025-10-06 12:04:49.857645239 +0000 UTC m=+1043.743670547" observedRunningTime="2025-10-06 12:04:50.893708169 +0000 UTC m=+1044.779733487" watchObservedRunningTime="2025-10-06 12:04:50.907618078 +0000 UTC m=+1044.793643386"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.939766 4958 scope.go:117] "RemoveContainer" containerID="1b07e72b8a8349eeb0637e284e7beeff8033c68b56372590ebd563595f132eca"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.942887 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.962649 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.962677 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bdff7949-e76a-484d-983d-c3f8fee7f175","Type":"ContainerDied","Data":"3c1baf0a5511b0d512ebcf27a5d0666d1af31e5a95fcd903f15ff39b88e5f5d8"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.965547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gzxhx" event={"ID":"f8492838-08b1-4d13-8697-bf595621d465","Type":"ContainerStarted","Data":"20fa331c4c25820521f1334d8395799615321f9c1186958a9769c7bfa9ab7778"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.969398 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.972452 4958 scope.go:117] "RemoveContainer" containerID="d4f06781df28f582f5978d251210caeb10e53375d803de3880309ed0e406df19"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.973034 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.976192 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4" event={"ID":"f71790e2-93a0-4e5a-9099-e2e5a103af3d","Type":"ContainerDied","Data":"aa6af3ab22016b3ee39fed0bf244d64c4a9c9ac173212c193e23daf3074f3a95"}
Oct 06 12:04:50 crc kubenswrapper[4958]: I1006 12:04:50.976266 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-l6bx4"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.018211 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.018998 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-log"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019011 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-log"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019031 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-api"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019037 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-api"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019049 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-log"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019059 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-log"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019076 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="init"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019082 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="init"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019095 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="probe"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019101 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="probe"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019116 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="cinder-scheduler"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019123 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="cinder-scheduler"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019130 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019136 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019161 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="dnsmasq-dns"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019168 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="dnsmasq-dns"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019180 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019186 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: E1006 12:04:51.019201 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019208 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019368 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="probe"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019384 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" containerName="dnsmasq-dns"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019395 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8409911-8a67-4382-87b7-7a050b025a09" containerName="cinder-scheduler"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019402 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019411 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" containerName="glance-log"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019421 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-log"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019429 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-api"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019443 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7e63d2-d43d-417e-939a-be456eaae637" containerName="neutron-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.019454 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" containerName="glance-httpd"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.020427 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.024503 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.026839 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.038448 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df9b9d74d-rffxt"]
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.049801 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-df9b9d74d-rffxt"]
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.059953 4958 scope.go:117] "RemoveContainer" containerID="85829cdeb0396bf6abe3552a08347214ec79f661f6a666f2091f84050b6b906b"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071536 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071630 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-combined-ca-bundle\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071661 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-logs\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071701 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-internal-tls-certs\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071732 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-httpd-run\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071756 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-config-data\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071778 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfwwn\" (UniqueName: \"kubernetes.io/projected/7b1cc01c-8b93-41aa-bf54-7d98363efbca-kube-api-access-mfwwn\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071810 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-scripts\") pod \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\" (UID: \"7b1cc01c-8b93-41aa-bf54-7d98363efbca\") "
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071918 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071950 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.071989 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtqx\" (UniqueName: \"kubernetes.io/projected/131b14e5-e45a-4fc4-817c-b8f82c27e92e-kube-api-access-gdtqx\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.072007 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131b14e5-e45a-4fc4-817c-b8f82c27e92e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.072022 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-scripts\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.072079 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-config-data\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.083418 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.084060 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.084405 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-logs" (OuterVolumeSpecName: "logs") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.087475 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-scripts" (OuterVolumeSpecName: "scripts") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.095990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1cc01c-8b93-41aa-bf54-7d98363efbca-kube-api-access-mfwwn" (OuterVolumeSpecName: "kube-api-access-mfwwn") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "kube-api-access-mfwwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.096054 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.114966 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.123753 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-l6bx4"] Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.150408 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-l6bx4"] Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.159491 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.159544 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.161580 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.164112 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.164684 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.166438 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174470 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-config-data\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174602 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdtqx\" (UniqueName: \"kubernetes.io/projected/131b14e5-e45a-4fc4-817c-b8f82c27e92e-kube-api-access-gdtqx\") pod \"cinder-scheduler-0\" 
(UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174669 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131b14e5-e45a-4fc4-817c-b8f82c27e92e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-scripts\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174770 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174780 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b1cc01c-8b93-41aa-bf54-7d98363efbca-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174789 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfwwn\" (UniqueName: \"kubernetes.io/projected/7b1cc01c-8b93-41aa-bf54-7d98363efbca-kube-api-access-mfwwn\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174798 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174819 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.174828 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.175965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131b14e5-e45a-4fc4-817c-b8f82c27e92e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.180970 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.189416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-scripts\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.200999 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.207863 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.211241 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131b14e5-e45a-4fc4-817c-b8f82c27e92e-config-data\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.212912 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdtqx\" (UniqueName: \"kubernetes.io/projected/131b14e5-e45a-4fc4-817c-b8f82c27e92e-kube-api-access-gdtqx\") pod \"cinder-scheduler-0\" (UID: \"131b14e5-e45a-4fc4-817c-b8f82c27e92e\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.214248 4958 scope.go:117] "RemoveContainer" 
containerID="5e604db492197be7746aaa503276106e68a4d9e81736affd54302a31a30ba540" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.239317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-config-data" (OuterVolumeSpecName: "config-data") pod "7b1cc01c-8b93-41aa-bf54-7d98363efbca" (UID: "7b1cc01c-8b93-41aa-bf54-7d98363efbca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.246329 4958 scope.go:117] "RemoveContainer" containerID="eb466551a2ed74deb5b0345d460e4a060341bf53984c36e1ad2159e49007280e" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.248674 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.270054 4958 scope.go:117] "RemoveContainer" containerID="3d6e23357b40b220c2cce11946235ee83bbcb5a25c4a585c3a497c082925b56b" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.276095 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.277314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.277559 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.277763 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.277845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4p88\" (UniqueName: \"kubernetes.io/projected/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-kube-api-access-m4p88\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.277938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-config-data\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.278109 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-scripts\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 
12:04:51.278316 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-logs\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.278471 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.278568 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1cc01c-8b93-41aa-bf54-7d98363efbca-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.278624 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.289138 4958 scope.go:117] "RemoveContainer" containerID="3e8e17445015a8cf78c4d36b4a999a630d2717c0eea0f10f82600c3784ae30b4" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.379731 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-logs\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.379972 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380216 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380542 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4p88\" (UniqueName: \"kubernetes.io/projected/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-kube-api-access-m4p88\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-config-data\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " 
pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380839 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-scripts\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.380280 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-logs\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.381393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.381553 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.384799 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-scripts\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.385480 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.386180 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.389302 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-config-data\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.402377 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4p88\" (UniqueName: \"kubernetes.io/projected/33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4-kube-api-access-m4p88\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.417895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.500723 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.537188 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.986742 4958 generic.go:334] "Generic (PLEG): container finished" podID="ee717167-2b5b-4c0a-9528-b14f49856f5e" containerID="0fe94584495c09f75008c7acfd8ecf375d80f2d3a0555d07594747077c408196" exitCode=0 Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.986849 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6xbk9" event={"ID":"ee717167-2b5b-4c0a-9528-b14f49856f5e","Type":"ContainerDied","Data":"0fe94584495c09f75008c7acfd8ecf375d80f2d3a0555d07594747077c408196"} Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.993719 4958 generic.go:334] "Generic (PLEG): container finished" podID="f8492838-08b1-4d13-8697-bf595621d465" containerID="01d0deaa66b9acb07807834538e40a9c822915037de0932b420bf804d22ee9ea" exitCode=0 Oct 06 12:04:51 crc kubenswrapper[4958]: I1006 12:04:51.993847 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gzxhx" event={"ID":"f8492838-08b1-4d13-8697-bf595621d465","Type":"ContainerDied","Data":"01d0deaa66b9acb07807834538e40a9c822915037de0932b420bf804d22ee9ea"} Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.000092 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.003959 4958 generic.go:334] "Generic (PLEG): container finished" podID="9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3" containerID="e605699a52e226bfe9b9e83d9cd83d10da6754e4c007920d3b8fdc00181e8885" exitCode=0 Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.004063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nsqbc" 
event={"ID":"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3","Type":"ContainerDied","Data":"e605699a52e226bfe9b9e83d9cd83d10da6754e4c007920d3b8fdc00181e8885"} Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.020217 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b1cc01c-8b93-41aa-bf54-7d98363efbca","Type":"ContainerDied","Data":"43e6b6cc944453ee0ba21cd26936ee8af6dda9689484ab901d805e620b334d7d"} Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.020264 4958 scope.go:117] "RemoveContainer" containerID="b6bdf703a557f076696027a3d0398d0fe978145e58903f39a8694f11dc29ff8c" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.020291 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.059629 4958 scope.go:117] "RemoveContainer" containerID="3417a50edc73a8f5851ebbd98453cc79184544b45feb20c6d00c9b8f44f808fd" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.075824 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.087730 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.124180 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.126086 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.129921 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.130244 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.131688 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.199229 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddd9cc34-6d2f-41d2-ba9f-e41230964003-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298138 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298169 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 
12:04:52.298188 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5pvf\" (UniqueName: \"kubernetes.io/projected/ddd9cc34-6d2f-41d2-ba9f-e41230964003-kube-api-access-p5pvf\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298256 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddd9cc34-6d2f-41d2-ba9f-e41230964003-logs\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.298295 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 
12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.399841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddd9cc34-6d2f-41d2-ba9f-e41230964003-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.399901 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.399925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.399942 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.399971 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5pvf\" (UniqueName: \"kubernetes.io/projected/ddd9cc34-6d2f-41d2-ba9f-e41230964003-kube-api-access-p5pvf\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.400017 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.400034 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddd9cc34-6d2f-41d2-ba9f-e41230964003-logs\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.400058 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.400360 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddd9cc34-6d2f-41d2-ba9f-e41230964003-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.401062 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.401667 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ddd9cc34-6d2f-41d2-ba9f-e41230964003-logs\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.405122 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.405258 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.405519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.414971 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd9cc34-6d2f-41d2-ba9f-e41230964003-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.417518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5pvf\" (UniqueName: \"kubernetes.io/projected/ddd9cc34-6d2f-41d2-ba9f-e41230964003-kube-api-access-p5pvf\") 
pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.432623 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ddd9cc34-6d2f-41d2-ba9f-e41230964003\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.448552 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.923440 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1cc01c-8b93-41aa-bf54-7d98363efbca" path="/var/lib/kubelet/pods/7b1cc01c-8b93-41aa-bf54-7d98363efbca/volumes" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.924219 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdff7949-e76a-484d-983d-c3f8fee7f175" path="/var/lib/kubelet/pods/bdff7949-e76a-484d-983d-c3f8fee7f175/volumes" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.924834 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8409911-8a67-4382-87b7-7a050b025a09" path="/var/lib/kubelet/pods/d8409911-8a67-4382-87b7-7a050b025a09/volumes" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.926212 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7e63d2-d43d-417e-939a-be456eaae637" path="/var/lib/kubelet/pods/db7e63d2-d43d-417e-939a-be456eaae637/volumes" Oct 06 12:04:52 crc kubenswrapper[4958]: I1006 12:04:52.926742 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71790e2-93a0-4e5a-9099-e2e5a103af3d" path="/var/lib/kubelet/pods/f71790e2-93a0-4e5a-9099-e2e5a103af3d/volumes" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.042354 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerStarted","Data":"80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3"} Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.042742 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-central-agent" containerID="cri-o://193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.042898 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.043299 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="proxy-httpd" containerID="cri-o://80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.043354 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="sg-core" containerID="cri-o://47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.043383 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-notification-agent" containerID="cri-o://c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.055900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"131b14e5-e45a-4fc4-817c-b8f82c27e92e","Type":"ContainerStarted","Data":"b89565526e1732ced6aa6578d10f1a6c513de4ff9fcca72b0f95bf295fdf3c4c"} Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.055947 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131b14e5-e45a-4fc4-817c-b8f82c27e92e","Type":"ContainerStarted","Data":"9780910bd0ad8df1590c3941cbed43380621d7f70d8bbf823f8a66d23351d54f"} Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.062711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4","Type":"ContainerStarted","Data":"4926cd939428d734a5190cfa87e102c0a592779308db6e424ef74f31a4b90775"} Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.062757 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4","Type":"ContainerStarted","Data":"c484a2a04402ea86860f41a38d236181f816456b30c53a30c56f8c5f230a5781"} Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.088342 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.089945 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.092512327 podStartE2EDuration="15.089928698s" podCreationTimestamp="2025-10-06 12:04:38 +0000 UTC" firstStartedPulling="2025-10-06 12:04:39.782540964 +0000 UTC m=+1033.668566272" lastFinishedPulling="2025-10-06 12:04:51.779957345 +0000 UTC m=+1045.665982643" observedRunningTime="2025-10-06 12:04:53.073543634 +0000 UTC m=+1046.959568942" watchObservedRunningTime="2025-10-06 12:04:53.089928698 +0000 UTC m=+1046.975954006" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.589115 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.698348 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.704461 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.739749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqgl\" (UniqueName: \"kubernetes.io/projected/f8492838-08b1-4d13-8697-bf595621d465-kube-api-access-mnqgl\") pod \"f8492838-08b1-4d13-8697-bf595621d465\" (UID: \"f8492838-08b1-4d13-8697-bf595621d465\") " Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.763564 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8492838-08b1-4d13-8697-bf595621d465-kube-api-access-mnqgl" (OuterVolumeSpecName: "kube-api-access-mnqgl") pod "f8492838-08b1-4d13-8697-bf595621d465" (UID: "f8492838-08b1-4d13-8697-bf595621d465"). InnerVolumeSpecName "kube-api-access-mnqgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.801936 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.802010 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.802050 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.802711 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d557eb58019a825d47465bf11cc9a867134d838f4f8d2d5b54e629c90b675773"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.802755 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://d557eb58019a825d47465bf11cc9a867134d838f4f8d2d5b54e629c90b675773" gracePeriod=600 Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.841180 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4b2hx\" (UniqueName: \"kubernetes.io/projected/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3-kube-api-access-4b2hx\") pod \"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3\" (UID: \"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3\") " Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.841278 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srqg8\" (UniqueName: \"kubernetes.io/projected/ee717167-2b5b-4c0a-9528-b14f49856f5e-kube-api-access-srqg8\") pod \"ee717167-2b5b-4c0a-9528-b14f49856f5e\" (UID: \"ee717167-2b5b-4c0a-9528-b14f49856f5e\") " Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.841649 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqgl\" (UniqueName: \"kubernetes.io/projected/f8492838-08b1-4d13-8697-bf595621d465-kube-api-access-mnqgl\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.845428 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee717167-2b5b-4c0a-9528-b14f49856f5e-kube-api-access-srqg8" (OuterVolumeSpecName: "kube-api-access-srqg8") pod "ee717167-2b5b-4c0a-9528-b14f49856f5e" (UID: "ee717167-2b5b-4c0a-9528-b14f49856f5e"). InnerVolumeSpecName "kube-api-access-srqg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.854876 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3-kube-api-access-4b2hx" (OuterVolumeSpecName: "kube-api-access-4b2hx") pod "9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3" (UID: "9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3"). InnerVolumeSpecName "kube-api-access-4b2hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.943730 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b2hx\" (UniqueName: \"kubernetes.io/projected/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3-kube-api-access-4b2hx\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4958]: I1006 12:04:53.944283 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srqg8\" (UniqueName: \"kubernetes.io/projected/ee717167-2b5b-4c0a-9528-b14f49856f5e-kube-api-access-srqg8\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.076925 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nsqbc" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.077215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nsqbc" event={"ID":"9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3","Type":"ContainerDied","Data":"fb1640726b6ba2d1765ebcfb6797915a49de0f83f5568dc613d6a61bd288868c"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.077240 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1640726b6ba2d1765ebcfb6797915a49de0f83f5568dc613d6a61bd288868c" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.081587 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="d557eb58019a825d47465bf11cc9a867134d838f4f8d2d5b54e629c90b675773" exitCode=0 Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.081682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"d557eb58019a825d47465bf11cc9a867134d838f4f8d2d5b54e629c90b675773"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.081761 4958 
scope.go:117] "RemoveContainer" containerID="50ea0a44529e4bdc070f78b8c163d73c0feb9c99ae2d2366012f3431b888a961" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.088966 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4","Type":"ContainerStarted","Data":"2198c79489870a973e9434cb23151a203d5e3583c9f8d6cd6fbf32f2cbd17bf0"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.094480 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerID="80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3" exitCode=0 Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.094505 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerID="47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d" exitCode=2 Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.094512 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerID="193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7" exitCode=0 Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.094561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerDied","Data":"80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.095790 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerDied","Data":"47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.095838 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerDied","Data":"193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.098630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131b14e5-e45a-4fc4-817c-b8f82c27e92e","Type":"ContainerStarted","Data":"9c60b3b7670ce9e1b36178897590509961591005fdcf4c89d8cd60dc7d5c3f37"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.101053 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6xbk9" event={"ID":"ee717167-2b5b-4c0a-9528-b14f49856f5e","Type":"ContainerDied","Data":"d6cf87a04d586117881e923cc0c9a26d953468166b93f4b0b6101d9a97f5ce18"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.101099 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6cf87a04d586117881e923cc0c9a26d953468166b93f4b0b6101d9a97f5ce18" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.101175 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6xbk9" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.108600 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gzxhx" event={"ID":"f8492838-08b1-4d13-8697-bf595621d465","Type":"ContainerDied","Data":"20fa331c4c25820521f1334d8395799615321f9c1186958a9769c7bfa9ab7778"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.108833 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fa331c4c25820521f1334d8395799615321f9c1186958a9769c7bfa9ab7778" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.108908 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gzxhx" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.111610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ddd9cc34-6d2f-41d2-ba9f-e41230964003","Type":"ContainerStarted","Data":"bdfedf278ee7dcf0a32a44e4f0bd504a6a300dd6e61e65c3a1f4cdcc28fd68de"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.111660 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ddd9cc34-6d2f-41d2-ba9f-e41230964003","Type":"ContainerStarted","Data":"f72ee8ad182d1cb99ed93ec2e33f4f1906354441cf3c191e277bfb2599ea12a5"} Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.135265 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.135243437 podStartE2EDuration="3.135243437s" podCreationTimestamp="2025-10-06 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:54.111677907 +0000 UTC m=+1047.997703255" watchObservedRunningTime="2025-10-06 12:04:54.135243437 +0000 UTC m=+1048.021268765" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.145825 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.145807176 podStartE2EDuration="4.145807176s" podCreationTimestamp="2025-10-06 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:54.141207467 +0000 UTC m=+1048.027232785" watchObservedRunningTime="2025-10-06 12:04:54.145807176 +0000 UTC m=+1048.031832484" Oct 06 12:04:54 crc kubenswrapper[4958]: I1006 12:04:54.962535 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076361 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-config-data\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-scripts\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076492 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-sg-core-conf-yaml\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076544 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk84n\" (UniqueName: \"kubernetes.io/projected/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-kube-api-access-zk84n\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076583 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-combined-ca-bundle\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076617 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-log-httpd\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.076643 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-run-httpd\") pod \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\" (UID: \"5c8fffd7-1c3e-46a3-b444-0fd22eaff575\") " Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.077535 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.077579 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.082220 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-kube-api-access-zk84n" (OuterVolumeSpecName: "kube-api-access-zk84n") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "kube-api-access-zk84n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.086699 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-scripts" (OuterVolumeSpecName: "scripts") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.106153 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.126625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ddd9cc34-6d2f-41d2-ba9f-e41230964003","Type":"ContainerStarted","Data":"b93c6c21d00b7d5f527e356fa222c5e6e9a8c0c23431983e55487502f2376cc8"} Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.132444 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"58063f347fe5bcd8235160b2a3cc2b46057a0950357e7063907acb38c848571d"} Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.135103 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerID="c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579" exitCode=0 Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.135608 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.135777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerDied","Data":"c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579"} Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.135803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c8fffd7-1c3e-46a3-b444-0fd22eaff575","Type":"ContainerDied","Data":"acba22244bac507d673aca3f9064b23210c12f0ea506d28e7869322c344c52ea"} Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.135819 4958 scope.go:117] "RemoveContainer" containerID="80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.153113 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.153095579 podStartE2EDuration="3.153095579s" podCreationTimestamp="2025-10-06 12:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:55.146122849 +0000 UTC m=+1049.032148157" watchObservedRunningTime="2025-10-06 12:04:55.153095579 +0000 UTC m=+1049.039120877" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.186389 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.186616 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.186692 4958 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk84n\" (UniqueName: \"kubernetes.io/projected/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-kube-api-access-zk84n\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.186749 4958 scope.go:117] "RemoveContainer" containerID="47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.186762 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.186929 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.220296 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.242396 4958 scope.go:117] "RemoveContainer" containerID="c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.298559 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-config-data" (OuterVolumeSpecName: "config-data") pod "5c8fffd7-1c3e-46a3-b444-0fd22eaff575" (UID: "5c8fffd7-1c3e-46a3-b444-0fd22eaff575"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.304117 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.304248 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8fffd7-1c3e-46a3-b444-0fd22eaff575-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.314281 4958 scope.go:117] "RemoveContainer" containerID="193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.334676 4958 scope.go:117] "RemoveContainer" containerID="80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.338507 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3\": container with ID starting with 80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3 not found: ID does not exist" containerID="80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.338539 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3"} err="failed to get container status \"80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3\": rpc error: code = NotFound desc = could not find container \"80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3\": container with ID starting with 80052173c3fb59ceb569e64c380c543f2a0fadf001f6d5592bf100c089d845c3 not found: ID does not 
exist" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.338559 4958 scope.go:117] "RemoveContainer" containerID="47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.345243 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d\": container with ID starting with 47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d not found: ID does not exist" containerID="47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.345281 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d"} err="failed to get container status \"47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d\": rpc error: code = NotFound desc = could not find container \"47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d\": container with ID starting with 47d45832ba43a81a4163752a379605fce53affec79dd40084db7a87e0ee33e6d not found: ID does not exist" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.345300 4958 scope.go:117] "RemoveContainer" containerID="c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.346052 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579\": container with ID starting with c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579 not found: ID does not exist" containerID="c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.346077 4958 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579"} err="failed to get container status \"c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579\": rpc error: code = NotFound desc = could not find container \"c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579\": container with ID starting with c5fdbff6fdb4090bba70e3d80a50116d5fe69e43ec27147b4f655fc3f449e579 not found: ID does not exist" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.346091 4958 scope.go:117] "RemoveContainer" containerID="193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.348137 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7\": container with ID starting with 193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7 not found: ID does not exist" containerID="193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.348170 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7"} err="failed to get container status \"193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7\": rpc error: code = NotFound desc = could not find container \"193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7\": container with ID starting with 193fed6e08ebe32078d874abcaed84c327519c0ca0d78726772106a4ecc67bf7 not found: ID does not exist" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.476557 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.491520 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512138 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512484 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-central-agent" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512502 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-central-agent" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512515 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="sg-core" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512522 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="sg-core" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512529 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512536 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512548 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-notification-agent" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512553 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-notification-agent" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512581 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee717167-2b5b-4c0a-9528-b14f49856f5e" 
containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512589 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee717167-2b5b-4c0a-9528-b14f49856f5e" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512602 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="proxy-httpd" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512608 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="proxy-httpd" Oct 06 12:04:55 crc kubenswrapper[4958]: E1006 12:04:55.512620 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8492838-08b1-4d13-8697-bf595621d465" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512625 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8492838-08b1-4d13-8697-bf595621d465" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512815 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512830 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-notification-agent" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512841 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="proxy-httpd" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512852 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8492838-08b1-4d13-8697-bf595621d465" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512867 4958 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="ceilometer-central-agent" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512877 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee717167-2b5b-4c0a-9528-b14f49856f5e" containerName="mariadb-database-create" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.512888 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" containerName="sg-core" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.514749 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.517015 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.517246 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.535164 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710059 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710133 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-scripts\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710208 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710480 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bphz9\" (UniqueName: \"kubernetes.io/projected/e6e42935-5c96-4ce1-8fa4-aef40c03a225-kube-api-access-bphz9\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710530 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710760 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-config-data\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.710868 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.812451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bphz9\" (UniqueName: 
\"kubernetes.io/projected/e6e42935-5c96-4ce1-8fa4-aef40c03a225-kube-api-access-bphz9\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.812491 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.812535 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-config-data\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.812562 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.812582 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.812623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-scripts\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 
12:04:55.812671 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.813077 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.813567 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.818300 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-scripts\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.821573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.823375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " 
pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.825901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-config-data\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:55 crc kubenswrapper[4958]: I1006 12:04:55.843527 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bphz9\" (UniqueName: \"kubernetes.io/projected/e6e42935-5c96-4ce1-8fa4-aef40c03a225-kube-api-access-bphz9\") pod \"ceilometer-0\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " pod="openstack/ceilometer-0" Oct 06 12:04:56 crc kubenswrapper[4958]: I1006 12:04:56.136225 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:56 crc kubenswrapper[4958]: I1006 12:04:56.500872 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 06 12:04:56 crc kubenswrapper[4958]: I1006 12:04:56.593373 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:56 crc kubenswrapper[4958]: W1006 12:04:56.599117 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e42935_5c96_4ce1_8fa4_aef40c03a225.slice/crio-c775022386d8ff16ce5a75f74d29b736fffc35b0b39b73ba9036e571c3fc885d WatchSource:0}: Error finding container c775022386d8ff16ce5a75f74d29b736fffc35b0b39b73ba9036e571c3fc885d: Status 404 returned error can't find the container with id c775022386d8ff16ce5a75f74d29b736fffc35b0b39b73ba9036e571c3fc885d Oct 06 12:04:56 crc kubenswrapper[4958]: I1006 12:04:56.925327 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c8fffd7-1c3e-46a3-b444-0fd22eaff575" 
path="/var/lib/kubelet/pods/5c8fffd7-1c3e-46a3-b444-0fd22eaff575/volumes" Oct 06 12:04:57 crc kubenswrapper[4958]: I1006 12:04:57.155938 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerStarted","Data":"c775022386d8ff16ce5a75f74d29b736fffc35b0b39b73ba9036e571c3fc885d"} Oct 06 12:04:58 crc kubenswrapper[4958]: I1006 12:04:58.166933 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerStarted","Data":"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e"} Oct 06 12:04:58 crc kubenswrapper[4958]: I1006 12:04:58.167479 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerStarted","Data":"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67"} Oct 06 12:04:59 crc kubenswrapper[4958]: I1006 12:04:59.180876 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerStarted","Data":"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5"} Oct 06 12:05:00 crc kubenswrapper[4958]: I1006 12:05:00.191124 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerStarted","Data":"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c"} Oct 06 12:05:00 crc kubenswrapper[4958]: I1006 12:05:00.191762 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:05:00 crc kubenswrapper[4958]: I1006 12:05:00.211704 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.870285883 podStartE2EDuration="5.211687781s" podCreationTimestamp="2025-10-06 
12:04:55 +0000 UTC" firstStartedPulling="2025-10-06 12:04:56.602540845 +0000 UTC m=+1050.488566153" lastFinishedPulling="2025-10-06 12:04:59.943942743 +0000 UTC m=+1053.829968051" observedRunningTime="2025-10-06 12:05:00.205566736 +0000 UTC m=+1054.091592044" watchObservedRunningTime="2025-10-06 12:05:00.211687781 +0000 UTC m=+1054.097713089" Oct 06 12:05:01 crc kubenswrapper[4958]: I1006 12:05:01.538378 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:05:01 crc kubenswrapper[4958]: I1006 12:05:01.538750 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:05:01 crc kubenswrapper[4958]: I1006 12:05:01.605180 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:05:01 crc kubenswrapper[4958]: I1006 12:05:01.615921 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:05:01 crc kubenswrapper[4958]: I1006 12:05:01.782571 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 12:05:02 crc kubenswrapper[4958]: I1006 12:05:02.209509 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:05:02 crc kubenswrapper[4958]: I1006 12:05:02.209544 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:05:02 crc kubenswrapper[4958]: I1006 12:05:02.449837 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:02 crc kubenswrapper[4958]: I1006 12:05:02.450198 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:02 crc 
kubenswrapper[4958]: I1006 12:05:02.505402 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:02 crc kubenswrapper[4958]: I1006 12:05:02.523763 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:03 crc kubenswrapper[4958]: I1006 12:05:03.220616 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:03 crc kubenswrapper[4958]: I1006 12:05:03.221502 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:04 crc kubenswrapper[4958]: I1006 12:05:04.174449 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:05:04 crc kubenswrapper[4958]: I1006 12:05:04.178500 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.258607 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.278165 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.320987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.652324 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1f67-account-create-nd8n5"] Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.653549 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.657301 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.677773 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1f67-account-create-nd8n5"] Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.805499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slzjn\" (UniqueName: \"kubernetes.io/projected/30bd3ffb-c536-40ae-af8e-e579520bb461-kube-api-access-slzjn\") pod \"nova-api-1f67-account-create-nd8n5\" (UID: \"30bd3ffb-c536-40ae-af8e-e579520bb461\") " pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.844665 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ebeb-account-create-cqw5c"] Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.845917 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.848266 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.851798 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ebeb-account-create-cqw5c"] Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.907061 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slzjn\" (UniqueName: \"kubernetes.io/projected/30bd3ffb-c536-40ae-af8e-e579520bb461-kube-api-access-slzjn\") pod \"nova-api-1f67-account-create-nd8n5\" (UID: \"30bd3ffb-c536-40ae-af8e-e579520bb461\") " pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.940131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slzjn\" (UniqueName: \"kubernetes.io/projected/30bd3ffb-c536-40ae-af8e-e579520bb461-kube-api-access-slzjn\") pod \"nova-api-1f67-account-create-nd8n5\" (UID: \"30bd3ffb-c536-40ae-af8e-e579520bb461\") " pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:05 crc kubenswrapper[4958]: I1006 12:05:05.978398 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.008646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xxj\" (UniqueName: \"kubernetes.io/projected/8004c5d1-adb8-4834-b181-1f9f393a9555-kube-api-access-d4xxj\") pod \"nova-cell0-ebeb-account-create-cqw5c\" (UID: \"8004c5d1-adb8-4834-b181-1f9f393a9555\") " pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.048801 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-279f-account-create-j5q5b"] Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.050069 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.054535 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.065919 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-279f-account-create-j5q5b"] Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.110050 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xxj\" (UniqueName: \"kubernetes.io/projected/8004c5d1-adb8-4834-b181-1f9f393a9555-kube-api-access-d4xxj\") pod \"nova-cell0-ebeb-account-create-cqw5c\" (UID: \"8004c5d1-adb8-4834-b181-1f9f393a9555\") " pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.144743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xxj\" (UniqueName: \"kubernetes.io/projected/8004c5d1-adb8-4834-b181-1f9f393a9555-kube-api-access-d4xxj\") pod \"nova-cell0-ebeb-account-create-cqw5c\" (UID: \"8004c5d1-adb8-4834-b181-1f9f393a9555\") " 
pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.183557 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.214161 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqtrp\" (UniqueName: \"kubernetes.io/projected/2660dd3f-a383-4257-8591-0ae6dfc37d62-kube-api-access-vqtrp\") pod \"nova-cell1-279f-account-create-j5q5b\" (UID: \"2660dd3f-a383-4257-8591-0ae6dfc37d62\") " pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.315497 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqtrp\" (UniqueName: \"kubernetes.io/projected/2660dd3f-a383-4257-8591-0ae6dfc37d62-kube-api-access-vqtrp\") pod \"nova-cell1-279f-account-create-j5q5b\" (UID: \"2660dd3f-a383-4257-8591-0ae6dfc37d62\") " pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.337841 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqtrp\" (UniqueName: \"kubernetes.io/projected/2660dd3f-a383-4257-8591-0ae6dfc37d62-kube-api-access-vqtrp\") pod \"nova-cell1-279f-account-create-j5q5b\" (UID: \"2660dd3f-a383-4257-8591-0ae6dfc37d62\") " pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.402549 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.570697 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1f67-account-create-nd8n5"] Oct 06 12:05:06 crc kubenswrapper[4958]: W1006 12:05:06.581163 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bd3ffb_c536_40ae_af8e_e579520bb461.slice/crio-2f03ae0d7a8d086652cf9dd63b2ddafa98aa0bb7fb13d9daeab521ea56170de2 WatchSource:0}: Error finding container 2f03ae0d7a8d086652cf9dd63b2ddafa98aa0bb7fb13d9daeab521ea56170de2: Status 404 returned error can't find the container with id 2f03ae0d7a8d086652cf9dd63b2ddafa98aa0bb7fb13d9daeab521ea56170de2 Oct 06 12:05:06 crc kubenswrapper[4958]: W1006 12:05:06.706945 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8004c5d1_adb8_4834_b181_1f9f393a9555.slice/crio-84627a17e7a9ad7f494c74da417f93a5691e8237cc9ea39fd446f9f734a0cc53 WatchSource:0}: Error finding container 84627a17e7a9ad7f494c74da417f93a5691e8237cc9ea39fd446f9f734a0cc53: Status 404 returned error can't find the container with id 84627a17e7a9ad7f494c74da417f93a5691e8237cc9ea39fd446f9f734a0cc53 Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.709369 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ebeb-account-create-cqw5c"] Oct 06 12:05:06 crc kubenswrapper[4958]: I1006 12:05:06.840916 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-279f-account-create-j5q5b"] Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.302570 4958 generic.go:334] "Generic (PLEG): container finished" podID="2660dd3f-a383-4257-8591-0ae6dfc37d62" containerID="9332de5a5120ad7fa67a25a4e15c057cddb2bb5f7dade6111a95be93aeac7579" exitCode=0 Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.302763 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-279f-account-create-j5q5b" event={"ID":"2660dd3f-a383-4257-8591-0ae6dfc37d62","Type":"ContainerDied","Data":"9332de5a5120ad7fa67a25a4e15c057cddb2bb5f7dade6111a95be93aeac7579"} Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.303084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-279f-account-create-j5q5b" event={"ID":"2660dd3f-a383-4257-8591-0ae6dfc37d62","Type":"ContainerStarted","Data":"afcaf68f2a4a4e8b227cdb3c4481484221a10cf6083f4599203c8364cebd20c2"} Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.304945 4958 generic.go:334] "Generic (PLEG): container finished" podID="30bd3ffb-c536-40ae-af8e-e579520bb461" containerID="58cd5dabc47b974671339c8128ad769ec7cd73a4962460365f64ed7ee184b341" exitCode=0 Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.304999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1f67-account-create-nd8n5" event={"ID":"30bd3ffb-c536-40ae-af8e-e579520bb461","Type":"ContainerDied","Data":"58cd5dabc47b974671339c8128ad769ec7cd73a4962460365f64ed7ee184b341"} Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.305020 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1f67-account-create-nd8n5" event={"ID":"30bd3ffb-c536-40ae-af8e-e579520bb461","Type":"ContainerStarted","Data":"2f03ae0d7a8d086652cf9dd63b2ddafa98aa0bb7fb13d9daeab521ea56170de2"} Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.307638 4958 generic.go:334] "Generic (PLEG): container finished" podID="8004c5d1-adb8-4834-b181-1f9f393a9555" containerID="b6a8901a24727c2bac3f7d1fa2a4e8d296acbb728b45ec919812e646dc73df52" exitCode=0 Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.307697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" 
event={"ID":"8004c5d1-adb8-4834-b181-1f9f393a9555","Type":"ContainerDied","Data":"b6a8901a24727c2bac3f7d1fa2a4e8d296acbb728b45ec919812e646dc73df52"} Oct 06 12:05:07 crc kubenswrapper[4958]: I1006 12:05:07.307728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" event={"ID":"8004c5d1-adb8-4834-b181-1f9f393a9555","Type":"ContainerStarted","Data":"84627a17e7a9ad7f494c74da417f93a5691e8237cc9ea39fd446f9f734a0cc53"} Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.807530 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.813696 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.819073 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.961816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slzjn\" (UniqueName: \"kubernetes.io/projected/30bd3ffb-c536-40ae-af8e-e579520bb461-kube-api-access-slzjn\") pod \"30bd3ffb-c536-40ae-af8e-e579520bb461\" (UID: \"30bd3ffb-c536-40ae-af8e-e579520bb461\") " Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.961885 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4xxj\" (UniqueName: \"kubernetes.io/projected/8004c5d1-adb8-4834-b181-1f9f393a9555-kube-api-access-d4xxj\") pod \"8004c5d1-adb8-4834-b181-1f9f393a9555\" (UID: \"8004c5d1-adb8-4834-b181-1f9f393a9555\") " Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.961974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqtrp\" (UniqueName: 
\"kubernetes.io/projected/2660dd3f-a383-4257-8591-0ae6dfc37d62-kube-api-access-vqtrp\") pod \"2660dd3f-a383-4257-8591-0ae6dfc37d62\" (UID: \"2660dd3f-a383-4257-8591-0ae6dfc37d62\") " Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.968351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2660dd3f-a383-4257-8591-0ae6dfc37d62-kube-api-access-vqtrp" (OuterVolumeSpecName: "kube-api-access-vqtrp") pod "2660dd3f-a383-4257-8591-0ae6dfc37d62" (UID: "2660dd3f-a383-4257-8591-0ae6dfc37d62"). InnerVolumeSpecName "kube-api-access-vqtrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.968813 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bd3ffb-c536-40ae-af8e-e579520bb461-kube-api-access-slzjn" (OuterVolumeSpecName: "kube-api-access-slzjn") pod "30bd3ffb-c536-40ae-af8e-e579520bb461" (UID: "30bd3ffb-c536-40ae-af8e-e579520bb461"). InnerVolumeSpecName "kube-api-access-slzjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:08 crc kubenswrapper[4958]: I1006 12:05:08.969035 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8004c5d1-adb8-4834-b181-1f9f393a9555-kube-api-access-d4xxj" (OuterVolumeSpecName: "kube-api-access-d4xxj") pod "8004c5d1-adb8-4834-b181-1f9f393a9555" (UID: "8004c5d1-adb8-4834-b181-1f9f393a9555"). InnerVolumeSpecName "kube-api-access-d4xxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.065234 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slzjn\" (UniqueName: \"kubernetes.io/projected/30bd3ffb-c536-40ae-af8e-e579520bb461-kube-api-access-slzjn\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.065282 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4xxj\" (UniqueName: \"kubernetes.io/projected/8004c5d1-adb8-4834-b181-1f9f393a9555-kube-api-access-d4xxj\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.065296 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqtrp\" (UniqueName: \"kubernetes.io/projected/2660dd3f-a383-4257-8591-0ae6dfc37d62-kube-api-access-vqtrp\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.335007 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1f67-account-create-nd8n5" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.335006 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1f67-account-create-nd8n5" event={"ID":"30bd3ffb-c536-40ae-af8e-e579520bb461","Type":"ContainerDied","Data":"2f03ae0d7a8d086652cf9dd63b2ddafa98aa0bb7fb13d9daeab521ea56170de2"} Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.335195 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f03ae0d7a8d086652cf9dd63b2ddafa98aa0bb7fb13d9daeab521ea56170de2" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.338399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" event={"ID":"8004c5d1-adb8-4834-b181-1f9f393a9555","Type":"ContainerDied","Data":"84627a17e7a9ad7f494c74da417f93a5691e8237cc9ea39fd446f9f734a0cc53"} Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.338542 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84627a17e7a9ad7f494c74da417f93a5691e8237cc9ea39fd446f9f734a0cc53" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.338415 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ebeb-account-create-cqw5c" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.340303 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-279f-account-create-j5q5b" event={"ID":"2660dd3f-a383-4257-8591-0ae6dfc37d62","Type":"ContainerDied","Data":"afcaf68f2a4a4e8b227cdb3c4481484221a10cf6083f4599203c8364cebd20c2"} Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.340348 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afcaf68f2a4a4e8b227cdb3c4481484221a10cf6083f4599203c8364cebd20c2" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.340391 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-279f-account-create-j5q5b" Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.380380 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.381258 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-central-agent" containerID="cri-o://543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" gracePeriod=30 Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.381267 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="proxy-httpd" containerID="cri-o://662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" gracePeriod=30 Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.381276 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="sg-core" containerID="cri-o://095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" gracePeriod=30 
Oct 06 12:05:09 crc kubenswrapper[4958]: I1006 12:05:09.381276 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-notification-agent" containerID="cri-o://4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" gracePeriod=30 Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.297199 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.361987 4958 generic.go:334] "Generic (PLEG): container finished" podID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" exitCode=0 Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362034 4958 generic.go:334] "Generic (PLEG): container finished" podID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" exitCode=2 Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362041 4958 generic.go:334] "Generic (PLEG): container finished" podID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" exitCode=0 Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362048 4958 generic.go:334] "Generic (PLEG): container finished" podID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" exitCode=0 Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362077 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerDied","Data":"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c"} Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362116 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerDied","Data":"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5"} Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362144 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerDied","Data":"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e"} Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerDied","Data":"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67"} Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e42935-5c96-4ce1-8fa4-aef40c03a225","Type":"ContainerDied","Data":"c775022386d8ff16ce5a75f74d29b736fffc35b0b39b73ba9036e571c3fc885d"} Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362205 4958 scope.go:117] "RemoveContainer" containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.362298 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.389716 4958 scope.go:117] "RemoveContainer" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.395615 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-run-httpd\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.395755 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-log-httpd\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.395871 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-combined-ca-bundle\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.396089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-config-data\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.396212 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bphz9\" (UniqueName: \"kubernetes.io/projected/e6e42935-5c96-4ce1-8fa4-aef40c03a225-kube-api-access-bphz9\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 
12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.396365 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-scripts\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.396518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-sg-core-conf-yaml\") pod \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\" (UID: \"e6e42935-5c96-4ce1-8fa4-aef40c03a225\") " Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.397245 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.397782 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.397879 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.402106 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-scripts" (OuterVolumeSpecName: "scripts") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.428364 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e42935-5c96-4ce1-8fa4-aef40c03a225-kube-api-access-bphz9" (OuterVolumeSpecName: "kube-api-access-bphz9") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "kube-api-access-bphz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.443729 4958 scope.go:117] "RemoveContainer" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.444106 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.475461 4958 scope.go:117] "RemoveContainer" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.495276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.501399 4958 scope.go:117] "RemoveContainer" containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.502081 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": container with ID starting with 662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c not found: ID does not exist" containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502112 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c"} err="failed to get container status \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": rpc error: code = NotFound desc = could not find container \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": container with ID starting with 662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502133 4958 scope.go:117] 
"RemoveContainer" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.502369 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": container with ID starting with 095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5 not found: ID does not exist" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502390 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5"} err="failed to get container status \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": rpc error: code = NotFound desc = could not find container \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": container with ID starting with 095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502404 4958 scope.go:117] "RemoveContainer" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.502625 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": container with ID starting with 4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e not found: ID does not exist" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502645 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e"} err="failed to get container status \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": rpc error: code = NotFound desc = could not find container \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": container with ID starting with 4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502655 4958 scope.go:117] "RemoveContainer" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.502855 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": container with ID starting with 543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67 not found: ID does not exist" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502879 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67"} err="failed to get container status \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": rpc error: code = NotFound desc = could not find container \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": container with ID starting with 543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.502892 4958 scope.go:117] "RemoveContainer" containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.503081 4958 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c"} err="failed to get container status \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": rpc error: code = NotFound desc = could not find container \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": container with ID starting with 662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.503108 4958 scope.go:117] "RemoveContainer" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.503341 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5"} err="failed to get container status \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": rpc error: code = NotFound desc = could not find container \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": container with ID starting with 095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.503354 4958 scope.go:117] "RemoveContainer" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.503661 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e"} err="failed to get container status \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": rpc error: code = NotFound desc = could not find container \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": container with ID starting with 4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e not 
found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.503718 4958 scope.go:117] "RemoveContainer" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.504102 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67"} err="failed to get container status \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": rpc error: code = NotFound desc = could not find container \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": container with ID starting with 543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.504131 4958 scope.go:117] "RemoveContainer" containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.504519 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c"} err="failed to get container status \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": rpc error: code = NotFound desc = could not find container \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": container with ID starting with 662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.504563 4958 scope.go:117] "RemoveContainer" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.504958 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5"} err="failed to get 
container status \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": rpc error: code = NotFound desc = could not find container \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": container with ID starting with 095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.504983 4958 scope.go:117] "RemoveContainer" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.505475 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e"} err="failed to get container status \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": rpc error: code = NotFound desc = could not find container \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": container with ID starting with 4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.505498 4958 scope.go:117] "RemoveContainer" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.505765 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67"} err="failed to get container status \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": rpc error: code = NotFound desc = could not find container \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": container with ID starting with 543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.505789 4958 scope.go:117] "RemoveContainer" 
containerID="662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506049 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c"} err="failed to get container status \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": rpc error: code = NotFound desc = could not find container \"662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c\": container with ID starting with 662a72d64de251e7acd1184c04b2e7f5fd9aef8cc35425d67eaa8366575e829c not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506070 4958 scope.go:117] "RemoveContainer" containerID="095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506390 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506448 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bphz9\" (UniqueName: \"kubernetes.io/projected/e6e42935-5c96-4ce1-8fa4-aef40c03a225-kube-api-access-bphz9\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506467 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506479 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.506489 4958 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e42935-5c96-4ce1-8fa4-aef40c03a225-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.507425 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5"} err="failed to get container status \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": rpc error: code = NotFound desc = could not find container \"095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5\": container with ID starting with 095ba21d43e4da0693b08ae4749d48cfa882e7c5e46f86481c432daea5645df5 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.507451 4958 scope.go:117] "RemoveContainer" containerID="4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.514944 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e"} err="failed to get container status \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": rpc error: code = NotFound desc = could not find container \"4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e\": container with ID starting with 4add19501dd052ee43b568081128f7bdf136249ddc747b31eb8fff5f4d98545e not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.515004 4958 scope.go:117] "RemoveContainer" containerID="543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.517323 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67"} err="failed to get container 
status \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": rpc error: code = NotFound desc = could not find container \"543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67\": container with ID starting with 543489d65ab668f808f1890af5ac7b717b3dc59e1d9581cdf5dc0e077559ac67 not found: ID does not exist" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.527992 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-config-data" (OuterVolumeSpecName: "config-data") pod "e6e42935-5c96-4ce1-8fa4-aef40c03a225" (UID: "e6e42935-5c96-4ce1-8fa4-aef40c03a225"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.609408 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e42935-5c96-4ce1-8fa4-aef40c03a225-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.740709 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.750909 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764351 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764693 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="sg-core" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764707 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="sg-core" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764732 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-central-agent" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764741 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-central-agent" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764754 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="proxy-httpd" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764759 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="proxy-httpd" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764778 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-notification-agent" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764784 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-notification-agent" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764792 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2660dd3f-a383-4257-8591-0ae6dfc37d62" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764798 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2660dd3f-a383-4257-8591-0ae6dfc37d62" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764811 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bd3ffb-c536-40ae-af8e-e579520bb461" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764816 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bd3ffb-c536-40ae-af8e-e579520bb461" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: E1006 12:05:10.764833 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8004c5d1-adb8-4834-b181-1f9f393a9555" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764841 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8004c5d1-adb8-4834-b181-1f9f393a9555" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.764993 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2660dd3f-a383-4257-8591-0ae6dfc37d62" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.765005 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="proxy-httpd" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.765020 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-notification-agent" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.765029 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bd3ffb-c536-40ae-af8e-e579520bb461" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.765041 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="ceilometer-central-agent" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.765052 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" containerName="sg-core" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.765064 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8004c5d1-adb8-4834-b181-1f9f393a9555" containerName="mariadb-account-create" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.766581 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.768548 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.769078 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.792136 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.917344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2njz\" (UniqueName: \"kubernetes.io/projected/930d6079-e92e-4e05-bfc7-25f3b2392123-kube-api-access-l2njz\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.918054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-scripts\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.918115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-log-httpd\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.918304 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-run-httpd\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " 
pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.918748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-config-data\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.918845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.919375 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:10 crc kubenswrapper[4958]: I1006 12:05:10.940598 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e42935-5c96-4ce1-8fa4-aef40c03a225" path="/var/lib/kubelet/pods/e6e42935-5c96-4ce1-8fa4-aef40c03a225/volumes" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033090 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-config-data\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033239 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033341 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2njz\" (UniqueName: \"kubernetes.io/projected/930d6079-e92e-4e05-bfc7-25f3b2392123-kube-api-access-l2njz\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033382 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-scripts\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-log-httpd\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.033445 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-run-httpd\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 
12:05:11.034091 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-log-httpd\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.034233 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-run-httpd\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.038119 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.038889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-config-data\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.039184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-scripts\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.055360 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2njz\" (UniqueName: \"kubernetes.io/projected/930d6079-e92e-4e05-bfc7-25f3b2392123-kube-api-access-l2njz\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " 
pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.056962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.090649 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.175983 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zld27"] Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.179508 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.183576 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kfwdp" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.184373 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.184690 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.189114 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zld27"] Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.239981 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.339845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-scripts\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.340025 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.340056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tt57\" (UniqueName: \"kubernetes.io/projected/c4703feb-17e7-47e1-9487-ffb7fd7303c2-kube-api-access-7tt57\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.340093 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-config-data\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.441707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-scripts\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.441908 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.441938 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tt57\" (UniqueName: \"kubernetes.io/projected/c4703feb-17e7-47e1-9487-ffb7fd7303c2-kube-api-access-7tt57\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.442792 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-config-data\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.446618 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-scripts\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.447935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.448570 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-config-data\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.468475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tt57\" (UniqueName: \"kubernetes.io/projected/c4703feb-17e7-47e1-9487-ffb7fd7303c2-kube-api-access-7tt57\") pod \"nova-cell0-conductor-db-sync-zld27\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.537603 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:11 crc kubenswrapper[4958]: I1006 12:05:11.608494 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:11 crc kubenswrapper[4958]: W1006 12:05:11.612067 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930d6079_e92e_4e05_bfc7_25f3b2392123.slice/crio-64e49275f4e305feef3d16bc07574f316b5be0331834302e1566feeb37a25161 WatchSource:0}: Error finding container 64e49275f4e305feef3d16bc07574f316b5be0331834302e1566feeb37a25161: Status 404 returned error can't find the container with id 64e49275f4e305feef3d16bc07574f316b5be0331834302e1566feeb37a25161 Oct 06 12:05:12 crc kubenswrapper[4958]: I1006 12:05:12.006275 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zld27"] Oct 06 12:05:12 crc kubenswrapper[4958]: I1006 12:05:12.387516 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerStarted","Data":"64e49275f4e305feef3d16bc07574f316b5be0331834302e1566feeb37a25161"} Oct 06 12:05:12 crc kubenswrapper[4958]: I1006 12:05:12.389894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zld27" event={"ID":"c4703feb-17e7-47e1-9487-ffb7fd7303c2","Type":"ContainerStarted","Data":"cd902c0186d3c345daaf24c7e6d7f7b5905899c442eb345b29eac063449b8e1a"} Oct 06 12:05:13 crc kubenswrapper[4958]: I1006 12:05:13.402921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerStarted","Data":"afe45e6488225ef89ec4227b4ff72c61d54c922f0650d1793e15bd207c2ce8cc"} Oct 06 12:05:14 crc kubenswrapper[4958]: I1006 12:05:14.414301 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerStarted","Data":"a173d851cb9195499aa959a8606ee7d4412483ddb3785677a4229f4657e64563"} Oct 06 12:05:14 crc kubenswrapper[4958]: I1006 12:05:14.414779 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerStarted","Data":"7cfe3f206e733dd9574ff8d71c0e8684a9da98fd84191044392c6443c09e4f98"} Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.475124 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zld27" event={"ID":"c4703feb-17e7-47e1-9487-ffb7fd7303c2","Type":"ContainerStarted","Data":"82440537c81f3157cd5b3ab206175befa2606065430c1e7af8e130544b646e8e"} Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.482139 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerStarted","Data":"e0e9f7afc89b0f570dfd1840af7adfbc86f969e4f962b7456b60abfa70cc1bf5"} Oct 06 
12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.482648 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-central-agent" containerID="cri-o://afe45e6488225ef89ec4227b4ff72c61d54c922f0650d1793e15bd207c2ce8cc" gracePeriod=30 Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.483316 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.483350 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="sg-core" containerID="cri-o://a173d851cb9195499aa959a8606ee7d4412483ddb3785677a4229f4657e64563" gracePeriod=30 Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.483375 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="proxy-httpd" containerID="cri-o://e0e9f7afc89b0f570dfd1840af7adfbc86f969e4f962b7456b60abfa70cc1bf5" gracePeriod=30 Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.483469 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-notification-agent" containerID="cri-o://7cfe3f206e733dd9574ff8d71c0e8684a9da98fd84191044392c6443c09e4f98" gracePeriod=30 Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.503286 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zld27" podStartSLOduration=1.635978436 podStartE2EDuration="8.50326885s" podCreationTimestamp="2025-10-06 12:05:11 +0000 UTC" firstStartedPulling="2025-10-06 12:05:12.013920661 +0000 UTC m=+1065.899945969" lastFinishedPulling="2025-10-06 12:05:18.881211075 +0000 UTC 
m=+1072.767236383" observedRunningTime="2025-10-06 12:05:19.495345321 +0000 UTC m=+1073.381370639" watchObservedRunningTime="2025-10-06 12:05:19.50326885 +0000 UTC m=+1073.389294178" Oct 06 12:05:19 crc kubenswrapper[4958]: I1006 12:05:19.523007 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.262304795 podStartE2EDuration="9.522981894s" podCreationTimestamp="2025-10-06 12:05:10 +0000 UTC" firstStartedPulling="2025-10-06 12:05:11.614966009 +0000 UTC m=+1065.500991317" lastFinishedPulling="2025-10-06 12:05:18.875643108 +0000 UTC m=+1072.761668416" observedRunningTime="2025-10-06 12:05:19.52252143 +0000 UTC m=+1073.408546748" watchObservedRunningTime="2025-10-06 12:05:19.522981894 +0000 UTC m=+1073.409007212" Oct 06 12:05:20 crc kubenswrapper[4958]: I1006 12:05:20.497294 4958 generic.go:334] "Generic (PLEG): container finished" podID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerID="e0e9f7afc89b0f570dfd1840af7adfbc86f969e4f962b7456b60abfa70cc1bf5" exitCode=0 Oct 06 12:05:20 crc kubenswrapper[4958]: I1006 12:05:20.497611 4958 generic.go:334] "Generic (PLEG): container finished" podID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerID="a173d851cb9195499aa959a8606ee7d4412483ddb3785677a4229f4657e64563" exitCode=2 Oct 06 12:05:20 crc kubenswrapper[4958]: I1006 12:05:20.497624 4958 generic.go:334] "Generic (PLEG): container finished" podID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerID="7cfe3f206e733dd9574ff8d71c0e8684a9da98fd84191044392c6443c09e4f98" exitCode=0 Oct 06 12:05:20 crc kubenswrapper[4958]: I1006 12:05:20.497320 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerDied","Data":"e0e9f7afc89b0f570dfd1840af7adfbc86f969e4f962b7456b60abfa70cc1bf5"} Oct 06 12:05:20 crc kubenswrapper[4958]: I1006 12:05:20.497707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerDied","Data":"a173d851cb9195499aa959a8606ee7d4412483ddb3785677a4229f4657e64563"} Oct 06 12:05:20 crc kubenswrapper[4958]: I1006 12:05:20.497756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerDied","Data":"7cfe3f206e733dd9574ff8d71c0e8684a9da98fd84191044392c6443c09e4f98"} Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.531068 4958 generic.go:334] "Generic (PLEG): container finished" podID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerID="afe45e6488225ef89ec4227b4ff72c61d54c922f0650d1793e15bd207c2ce8cc" exitCode=0 Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.531187 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerDied","Data":"afe45e6488225ef89ec4227b4ff72c61d54c922f0650d1793e15bd207c2ce8cc"} Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.531555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930d6079-e92e-4e05-bfc7-25f3b2392123","Type":"ContainerDied","Data":"64e49275f4e305feef3d16bc07574f316b5be0331834302e1566feeb37a25161"} Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.531575 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64e49275f4e305feef3d16bc07574f316b5be0331834302e1566feeb37a25161" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.587188 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660129 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2njz\" (UniqueName: \"kubernetes.io/projected/930d6079-e92e-4e05-bfc7-25f3b2392123-kube-api-access-l2njz\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660267 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-sg-core-conf-yaml\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660322 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-combined-ca-bundle\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660433 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-log-httpd\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660476 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-scripts\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660620 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-config-data\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.660662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-run-httpd\") pod \"930d6079-e92e-4e05-bfc7-25f3b2392123\" (UID: \"930d6079-e92e-4e05-bfc7-25f3b2392123\") " Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.661394 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.661457 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.666208 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930d6079-e92e-4e05-bfc7-25f3b2392123-kube-api-access-l2njz" (OuterVolumeSpecName: "kube-api-access-l2njz") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). InnerVolumeSpecName "kube-api-access-l2njz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.676241 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-scripts" (OuterVolumeSpecName: "scripts") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.699344 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.748497 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.762838 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2njz\" (UniqueName: \"kubernetes.io/projected/930d6079-e92e-4e05-bfc7-25f3b2392123-kube-api-access-l2njz\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.762869 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.762882 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.762893 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.762903 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.762912 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930d6079-e92e-4e05-bfc7-25f3b2392123-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.777984 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-config-data" (OuterVolumeSpecName: "config-data") pod "930d6079-e92e-4e05-bfc7-25f3b2392123" (UID: "930d6079-e92e-4e05-bfc7-25f3b2392123"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:22 crc kubenswrapper[4958]: I1006 12:05:22.865352 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930d6079-e92e-4e05-bfc7-25f3b2392123-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.544351 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.592074 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.603216 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.623460 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:23 crc kubenswrapper[4958]: E1006 12:05:23.624135 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-central-agent" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.624247 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-central-agent" Oct 06 12:05:23 crc kubenswrapper[4958]: E1006 12:05:23.624387 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-notification-agent" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.624475 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-notification-agent" Oct 06 12:05:23 crc kubenswrapper[4958]: E1006 12:05:23.624575 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" 
containerName="proxy-httpd" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.624662 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="proxy-httpd" Oct 06 12:05:23 crc kubenswrapper[4958]: E1006 12:05:23.624751 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="sg-core" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.624820 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="sg-core" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.625136 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-notification-agent" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.625500 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="ceilometer-central-agent" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.625636 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="proxy-httpd" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.625728 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" containerName="sg-core" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.627903 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.644764 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.644797 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.658309 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785234 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785319 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-log-httpd\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785361 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-scripts\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " 
pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785401 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-config-data\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-run-httpd\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.785460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkb94\" (UniqueName: \"kubernetes.io/projected/df7de90f-720e-4d12-92b8-ced2ed491360-kube-api-access-nkb94\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.886835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-run-httpd\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.886946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkb94\" (UniqueName: \"kubernetes.io/projected/df7de90f-720e-4d12-92b8-ced2ed491360-kube-api-access-nkb94\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.887085 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.887199 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-log-httpd\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.887271 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.887306 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-scripts\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.887343 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-config-data\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.888069 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-run-httpd\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc 
kubenswrapper[4958]: I1006 12:05:23.888681 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-log-httpd\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.892060 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.893361 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-config-data\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.906332 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.906922 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-scripts\") pod \"ceilometer-0\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.912849 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkb94\" (UniqueName: \"kubernetes.io/projected/df7de90f-720e-4d12-92b8-ced2ed491360-kube-api-access-nkb94\") pod \"ceilometer-0\" (UID: 
\"df7de90f-720e-4d12-92b8-ced2ed491360\") " pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4958]: I1006 12:05:23.958254 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:24 crc kubenswrapper[4958]: I1006 12:05:24.475910 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:24 crc kubenswrapper[4958]: I1006 12:05:24.553522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerStarted","Data":"4fcb3f82b8bf1bbf6ce9c1fcfdf0e4cfcfe6e7b8a1d62e6a45e631b5f812db3c"} Oct 06 12:05:24 crc kubenswrapper[4958]: I1006 12:05:24.926303 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930d6079-e92e-4e05-bfc7-25f3b2392123" path="/var/lib/kubelet/pods/930d6079-e92e-4e05-bfc7-25f3b2392123/volumes" Oct 06 12:05:25 crc kubenswrapper[4958]: I1006 12:05:25.568292 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerStarted","Data":"a510678ee0d09a9b3aebaa16ea4ffde1fa2ecf35424ec5581a23101af74b1ddf"} Oct 06 12:05:26 crc kubenswrapper[4958]: I1006 12:05:26.580986 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerStarted","Data":"ebe011f647b372de72d69d4f11ca1b0380879092844dd16287ee5becbf99b893"} Oct 06 12:05:27 crc kubenswrapper[4958]: I1006 12:05:27.591317 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerStarted","Data":"52d57eb75384e2796cd25f00fea69b3e1d15a653bd62d82d2ef2327172ba9adb"} Oct 06 12:05:28 crc kubenswrapper[4958]: I1006 12:05:28.609837 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerStarted","Data":"792e79d8587fa71c5c2166f0c3dea77b11fbc99452d0f4541c45ecf36b647b1c"} Oct 06 12:05:28 crc kubenswrapper[4958]: I1006 12:05:28.610738 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:05:28 crc kubenswrapper[4958]: I1006 12:05:28.618508 4958 generic.go:334] "Generic (PLEG): container finished" podID="c4703feb-17e7-47e1-9487-ffb7fd7303c2" containerID="82440537c81f3157cd5b3ab206175befa2606065430c1e7af8e130544b646e8e" exitCode=0 Oct 06 12:05:28 crc kubenswrapper[4958]: I1006 12:05:28.618564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zld27" event={"ID":"c4703feb-17e7-47e1-9487-ffb7fd7303c2","Type":"ContainerDied","Data":"82440537c81f3157cd5b3ab206175befa2606065430c1e7af8e130544b646e8e"} Oct 06 12:05:28 crc kubenswrapper[4958]: I1006 12:05:28.648836 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.084367857 podStartE2EDuration="5.648814006s" podCreationTimestamp="2025-10-06 12:05:23 +0000 UTC" firstStartedPulling="2025-10-06 12:05:24.502867264 +0000 UTC m=+1078.388892572" lastFinishedPulling="2025-10-06 12:05:28.067313403 +0000 UTC m=+1081.953338721" observedRunningTime="2025-10-06 12:05:28.641711592 +0000 UTC m=+1082.527736940" watchObservedRunningTime="2025-10-06 12:05:28.648814006 +0000 UTC m=+1082.534839304" Oct 06 12:05:29 crc kubenswrapper[4958]: I1006 12:05:29.988528 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.119465 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-config-data\") pod \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.119882 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-scripts\") pod \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.119920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-combined-ca-bundle\") pod \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.120041 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tt57\" (UniqueName: \"kubernetes.io/projected/c4703feb-17e7-47e1-9487-ffb7fd7303c2-kube-api-access-7tt57\") pod \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\" (UID: \"c4703feb-17e7-47e1-9487-ffb7fd7303c2\") " Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.125905 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-scripts" (OuterVolumeSpecName: "scripts") pod "c4703feb-17e7-47e1-9487-ffb7fd7303c2" (UID: "c4703feb-17e7-47e1-9487-ffb7fd7303c2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.127707 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4703feb-17e7-47e1-9487-ffb7fd7303c2-kube-api-access-7tt57" (OuterVolumeSpecName: "kube-api-access-7tt57") pod "c4703feb-17e7-47e1-9487-ffb7fd7303c2" (UID: "c4703feb-17e7-47e1-9487-ffb7fd7303c2"). InnerVolumeSpecName "kube-api-access-7tt57". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.154791 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4703feb-17e7-47e1-9487-ffb7fd7303c2" (UID: "c4703feb-17e7-47e1-9487-ffb7fd7303c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.155476 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-config-data" (OuterVolumeSpecName: "config-data") pod "c4703feb-17e7-47e1-9487-ffb7fd7303c2" (UID: "c4703feb-17e7-47e1-9487-ffb7fd7303c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.222832 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.222873 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.222890 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tt57\" (UniqueName: \"kubernetes.io/projected/c4703feb-17e7-47e1-9487-ffb7fd7303c2-kube-api-access-7tt57\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.222903 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4703feb-17e7-47e1-9487-ffb7fd7303c2-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.651571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zld27" event={"ID":"c4703feb-17e7-47e1-9487-ffb7fd7303c2","Type":"ContainerDied","Data":"cd902c0186d3c345daaf24c7e6d7f7b5905899c442eb345b29eac063449b8e1a"} Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.651608 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd902c0186d3c345daaf24c7e6d7f7b5905899c442eb345b29eac063449b8e1a" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.651620 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zld27" Oct 06 12:05:30 crc kubenswrapper[4958]: E1006 12:05:30.764127 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4703feb_17e7_47e1_9487_ffb7fd7303c2.slice/crio-cd902c0186d3c345daaf24c7e6d7f7b5905899c442eb345b29eac063449b8e1a\": RecentStats: unable to find data in memory cache]" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.769857 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 12:05:30 crc kubenswrapper[4958]: E1006 12:05:30.770282 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4703feb-17e7-47e1-9487-ffb7fd7303c2" containerName="nova-cell0-conductor-db-sync" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.770299 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4703feb-17e7-47e1-9487-ffb7fd7303c2" containerName="nova-cell0-conductor-db-sync" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.770472 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4703feb-17e7-47e1-9487-ffb7fd7303c2" containerName="nova-cell0-conductor-db-sync" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.771107 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.773963 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kfwdp" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.777733 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.806299 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.840383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded8adc6-35b0-4901-89ec-7f314c7817e7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.840468 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4dj\" (UniqueName: \"kubernetes.io/projected/ded8adc6-35b0-4901-89ec-7f314c7817e7-kube-api-access-fw4dj\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.840547 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded8adc6-35b0-4901-89ec-7f314c7817e7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.942065 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw4dj\" (UniqueName: 
\"kubernetes.io/projected/ded8adc6-35b0-4901-89ec-7f314c7817e7-kube-api-access-fw4dj\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.942366 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded8adc6-35b0-4901-89ec-7f314c7817e7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.942483 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded8adc6-35b0-4901-89ec-7f314c7817e7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.948446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded8adc6-35b0-4901-89ec-7f314c7817e7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.957582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded8adc6-35b0-4901-89ec-7f314c7817e7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:30 crc kubenswrapper[4958]: I1006 12:05:30.972871 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw4dj\" (UniqueName: \"kubernetes.io/projected/ded8adc6-35b0-4901-89ec-7f314c7817e7-kube-api-access-fw4dj\") pod \"nova-cell0-conductor-0\" (UID: 
\"ded8adc6-35b0-4901-89ec-7f314c7817e7\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:31 crc kubenswrapper[4958]: I1006 12:05:31.141565 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:31 crc kubenswrapper[4958]: I1006 12:05:31.647787 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 12:05:31 crc kubenswrapper[4958]: W1006 12:05:31.661010 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded8adc6_35b0_4901_89ec_7f314c7817e7.slice/crio-d79bc61ee0a71beb64c44f424ff18f05c3ac000011012f818ec4257d7444472a WatchSource:0}: Error finding container d79bc61ee0a71beb64c44f424ff18f05c3ac000011012f818ec4257d7444472a: Status 404 returned error can't find the container with id d79bc61ee0a71beb64c44f424ff18f05c3ac000011012f818ec4257d7444472a Oct 06 12:05:32 crc kubenswrapper[4958]: I1006 12:05:32.685555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ded8adc6-35b0-4901-89ec-7f314c7817e7","Type":"ContainerStarted","Data":"c2a43bb3152d53b9d81f0285a1342f57a674438f1985b01932fa5f72fc81d2a2"} Oct 06 12:05:32 crc kubenswrapper[4958]: I1006 12:05:32.685985 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:32 crc kubenswrapper[4958]: I1006 12:05:32.686003 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ded8adc6-35b0-4901-89ec-7f314c7817e7","Type":"ContainerStarted","Data":"d79bc61ee0a71beb64c44f424ff18f05c3ac000011012f818ec4257d7444472a"} Oct 06 12:05:32 crc kubenswrapper[4958]: I1006 12:05:32.715523 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.715498279 podStartE2EDuration="2.715498279s" 
podCreationTimestamp="2025-10-06 12:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:32.709524139 +0000 UTC m=+1086.595549517" watchObservedRunningTime="2025-10-06 12:05:32.715498279 +0000 UTC m=+1086.601523627" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.185695 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.709331 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mbwxn"] Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.713180 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.715565 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.715566 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.721740 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mbwxn"] Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.877371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.877425 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsl29\" (UniqueName: 
\"kubernetes.io/projected/6ed410ad-6f3a-4b26-bdad-1de8609840cb-kube-api-access-bsl29\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.877460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-config-data\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.877532 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-scripts\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.900540 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.901940 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.904986 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.952109 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980482 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980560 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsl29\" (UniqueName: \"kubernetes.io/projected/6ed410ad-6f3a-4b26-bdad-1de8609840cb-kube-api-access-bsl29\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749169b6-fbac-4d10-858e-9d8c592ac408-logs\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980647 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-config-data\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980769 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-scripts\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.980986 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-config-data\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.981171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x64d\" (UniqueName: \"kubernetes.io/projected/749169b6-fbac-4d10-858e-9d8c592ac408-kube-api-access-7x64d\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:36 crc kubenswrapper[4958]: I1006 12:05:36.999919 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-scripts\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.013908 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.023129 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsl29\" (UniqueName: \"kubernetes.io/projected/6ed410ad-6f3a-4b26-bdad-1de8609840cb-kube-api-access-bsl29\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.023840 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.023912 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.024533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-config-data\") pod \"nova-cell0-cell-mapping-mbwxn\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.037032 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.085238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x64d\" (UniqueName: \"kubernetes.io/projected/749169b6-fbac-4d10-858e-9d8c592ac408-kube-api-access-7x64d\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 
12:05:37.086230 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvd4g\" (UniqueName: \"kubernetes.io/projected/2bfdc32c-205a-436a-b5a7-9437d8db55cb-kube-api-access-rvd4g\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086254 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-config-data\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086330 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749169b6-fbac-4d10-858e-9d8c592ac408-logs\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086427 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-config-data\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.086460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfdc32c-205a-436a-b5a7-9437d8db55cb-logs\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.087273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749169b6-fbac-4d10-858e-9d8c592ac408-logs\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.097983 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-config-data\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.098414 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.110839 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.110921 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.112419 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.112647 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x64d\" (UniqueName: \"kubernetes.io/projected/749169b6-fbac-4d10-858e-9d8c592ac408-kube-api-access-7x64d\") pod \"nova-api-0\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") " pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.122618 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.148455 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.173918 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.175283 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.181507 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188326 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfdc32c-205a-436a-b5a7-9437d8db55cb-logs\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188389 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n225\" (UniqueName: \"kubernetes.io/projected/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-kube-api-access-4n225\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188413 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvd4g\" (UniqueName: \"kubernetes.io/projected/2bfdc32c-205a-436a-b5a7-9437d8db55cb-kube-api-access-rvd4g\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188432 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-config-data\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188463 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.188552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-config-data\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.193852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfdc32c-205a-436a-b5a7-9437d8db55cb-logs\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.201686 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jhwnw"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.201898 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-config-data\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.203669 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.203854 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.231097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvd4g\" (UniqueName: \"kubernetes.io/projected/2bfdc32c-205a-436a-b5a7-9437d8db55cb-kube-api-access-rvd4g\") pod \"nova-metadata-0\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.246819 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.256322 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.262448 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jhwnw"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.289972 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjfr\" (UniqueName: \"kubernetes.io/projected/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-kube-api-access-rzjfr\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290046 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290093 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-4n225\" (UniqueName: \"kubernetes.io/projected/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-kube-api-access-4n225\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-config\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290173 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290239 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-config-data\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290264 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290285 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.290317 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6zw\" (UniqueName: \"kubernetes.io/projected/6e659360-bc60-4dc2-90b0-af9567ac637d-kube-api-access-2d6zw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.295745 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.298120 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-config-data\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.308087 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n225\" (UniqueName: 
\"kubernetes.io/projected/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-kube-api-access-4n225\") pod \"nova-scheduler-0\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.318754 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.342520 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.395210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.395315 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.395371 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-config\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.396893 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: 
\"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.398961 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-config\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.399204 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.400269 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.400363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.400453 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6zw\" (UniqueName: \"kubernetes.io/projected/6e659360-bc60-4dc2-90b0-af9567ac637d-kube-api-access-2d6zw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.400554 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.400582 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjfr\" (UniqueName: \"kubernetes.io/projected/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-kube-api-access-rzjfr\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.400687 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.401263 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.401471 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc 
kubenswrapper[4958]: I1006 12:05:37.404935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.420981 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjfr\" (UniqueName: \"kubernetes.io/projected/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-kube-api-access-rzjfr\") pod \"dnsmasq-dns-845d6d6f59-jhwnw\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.425848 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.428521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6zw\" (UniqueName: \"kubernetes.io/projected/6e659360-bc60-4dc2-90b0-af9567ac637d-kube-api-access-2d6zw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.667907 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.694665 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.724247 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mbwxn"] Oct 06 12:05:37 crc kubenswrapper[4958]: W1006 12:05:37.742192 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ed410ad_6f3a_4b26_bdad_1de8609840cb.slice/crio-7d4755b84f6d24f9c5d6032732dd4bb604d65ec9c610ed37507da0bb53a2b914 WatchSource:0}: Error finding container 7d4755b84f6d24f9c5d6032732dd4bb604d65ec9c610ed37507da0bb53a2b914: Status 404 returned error can't find the container with id 7d4755b84f6d24f9c5d6032732dd4bb604d65ec9c610ed37507da0bb53a2b914 Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.851636 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.929689 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kpxjr"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.930958 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.936735 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.936934 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.938644 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kpxjr"] Oct 06 12:05:37 crc kubenswrapper[4958]: I1006 12:05:37.947439 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.024443 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.025573 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-scripts\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.025604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-config-data\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.025635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.025662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmn8x\" (UniqueName: \"kubernetes.io/projected/847e5baa-32e2-4bba-88c2-a493724883f9-kube-api-access-mmn8x\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.127459 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-scripts\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.127512 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-config-data\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.127549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.127586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmn8x\" (UniqueName: 
\"kubernetes.io/projected/847e5baa-32e2-4bba-88c2-a493724883f9-kube-api-access-mmn8x\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.141798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-scripts\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.153994 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.154511 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-config-data\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.168793 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmn8x\" (UniqueName: \"kubernetes.io/projected/847e5baa-32e2-4bba-88c2-a493724883f9-kube-api-access-mmn8x\") pod \"nova-cell1-conductor-db-sync-kpxjr\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.248209 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jhwnw"] Oct 06 12:05:38 crc 
kubenswrapper[4958]: I1006 12:05:38.262038 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.288946 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.749095 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kpxjr"] Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.758406 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mbwxn" event={"ID":"6ed410ad-6f3a-4b26-bdad-1de8609840cb","Type":"ContainerStarted","Data":"dd9530070cb06ce9962ff69b3a29281c0fe964c5ca71d08d584dad6c2e23d6ee"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.758697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mbwxn" event={"ID":"6ed410ad-6f3a-4b26-bdad-1de8609840cb","Type":"ContainerStarted","Data":"7d4755b84f6d24f9c5d6032732dd4bb604d65ec9c610ed37507da0bb53a2b914"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.759785 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a98f260-a4db-4810-8dbd-42cdc4bb3e20","Type":"ContainerStarted","Data":"6b1eecbd9d48bc5f40c856877e563e83d492397811aecb6bbd8d72c321c6275f"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.761394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfdc32c-205a-436a-b5a7-9437d8db55cb","Type":"ContainerStarted","Data":"2198396185e3f89b1682385b30b866ac2cc170d5e7c53d487d6e4ca33e312b75"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.764536 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"749169b6-fbac-4d10-858e-9d8c592ac408","Type":"ContainerStarted","Data":"604e00a66a104af7a5c7bcc7f217beac88b23b4bbacdf261594cba17873e9530"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.767191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e659360-bc60-4dc2-90b0-af9567ac637d","Type":"ContainerStarted","Data":"eb78bbc392b24e830c7c703860d533080b318c99e1cf7435642d2222c5874336"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.769612 4958 generic.go:334] "Generic (PLEG): container finished" podID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerID="edf653c280a78f421d48acef18d37cfb503a817ceee78823a587dfa97a2a76b8" exitCode=0 Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.769654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" event={"ID":"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652","Type":"ContainerDied","Data":"edf653c280a78f421d48acef18d37cfb503a817ceee78823a587dfa97a2a76b8"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.769678 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" event={"ID":"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652","Type":"ContainerStarted","Data":"7f300d2531a071ea8fdb4cb4b3b5979880ef70cb515dc0e592768c769ebc37f5"} Oct 06 12:05:38 crc kubenswrapper[4958]: I1006 12:05:38.781134 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mbwxn" podStartSLOduration=2.781114466 podStartE2EDuration="2.781114466s" podCreationTimestamp="2025-10-06 12:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:38.779367454 +0000 UTC m=+1092.665392762" watchObservedRunningTime="2025-10-06 12:05:38.781114466 +0000 UTC m=+1092.667139764" Oct 06 12:05:39 crc kubenswrapper[4958]: I1006 12:05:39.787665 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" event={"ID":"847e5baa-32e2-4bba-88c2-a493724883f9","Type":"ContainerStarted","Data":"5282bdbc80071e56bf4669acdc716b15a530e229f47e2b2546483be68d41c10f"} Oct 06 12:05:39 crc kubenswrapper[4958]: I1006 12:05:39.788024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" event={"ID":"847e5baa-32e2-4bba-88c2-a493724883f9","Type":"ContainerStarted","Data":"437c76c7713c812c45757f35dcc01a40b5b7271ea335dab9cad25ecf6d035c90"} Oct 06 12:05:39 crc kubenswrapper[4958]: I1006 12:05:39.792820 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" event={"ID":"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652","Type":"ContainerStarted","Data":"71988c94015d22c0b2dbbc077888c692b3d96c6629f3713ac3772bcdd51e0241"} Oct 06 12:05:39 crc kubenswrapper[4958]: I1006 12:05:39.792870 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:39 crc kubenswrapper[4958]: I1006 12:05:39.807049 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" podStartSLOduration=2.807034771 podStartE2EDuration="2.807034771s" podCreationTimestamp="2025-10-06 12:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:39.802307359 +0000 UTC m=+1093.688332677" watchObservedRunningTime="2025-10-06 12:05:39.807034771 +0000 UTC m=+1093.693060079" Oct 06 12:05:39 crc kubenswrapper[4958]: I1006 12:05:39.838009 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" podStartSLOduration=2.837994114 podStartE2EDuration="2.837994114s" podCreationTimestamp="2025-10-06 12:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:39.835006114 +0000 UTC m=+1093.721031422" watchObservedRunningTime="2025-10-06 12:05:39.837994114 +0000 UTC m=+1093.724019422" Oct 06 12:05:40 crc kubenswrapper[4958]: I1006 12:05:40.593355 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:40 crc kubenswrapper[4958]: I1006 12:05:40.631334 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.825501 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749169b6-fbac-4d10-858e-9d8c592ac408","Type":"ContainerStarted","Data":"e78290cb4959e8057ec06906db8acb4e63f48e640c93a51ac03fd0ab1c5a4644"} Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.826124 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749169b6-fbac-4d10-858e-9d8c592ac408","Type":"ContainerStarted","Data":"0f6c302137de9d82ddf5bd0f335f230fa69764352d0459545e4aeeecb5c7622d"} Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.836848 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e659360-bc60-4dc2-90b0-af9567ac637d","Type":"ContainerStarted","Data":"5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4"} Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.836899 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6e659360-bc60-4dc2-90b0-af9567ac637d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4" gracePeriod=30 Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.838670 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"3a98f260-a4db-4810-8dbd-42cdc4bb3e20","Type":"ContainerStarted","Data":"cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc"} Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.848912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfdc32c-205a-436a-b5a7-9437d8db55cb","Type":"ContainerStarted","Data":"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103"} Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.849007 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfdc32c-205a-436a-b5a7-9437d8db55cb","Type":"ContainerStarted","Data":"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1"} Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.849560 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-log" containerID="cri-o://4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1" gracePeriod=30 Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.849944 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-metadata" containerID="cri-o://48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103" gracePeriod=30 Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.853364 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.210379191 podStartE2EDuration="6.853341846s" podCreationTimestamp="2025-10-06 12:05:36 +0000 UTC" firstStartedPulling="2025-10-06 12:05:37.880422756 +0000 UTC m=+1091.766448064" lastFinishedPulling="2025-10-06 12:05:41.523385411 +0000 UTC m=+1095.409410719" observedRunningTime="2025-10-06 12:05:42.845276043 +0000 UTC m=+1096.731301361" 
watchObservedRunningTime="2025-10-06 12:05:42.853341846 +0000 UTC m=+1096.739367154" Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.879803 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.619813739 podStartE2EDuration="5.879783813s" podCreationTimestamp="2025-10-06 12:05:37 +0000 UTC" firstStartedPulling="2025-10-06 12:05:38.264377196 +0000 UTC m=+1092.150402504" lastFinishedPulling="2025-10-06 12:05:41.52434726 +0000 UTC m=+1095.410372578" observedRunningTime="2025-10-06 12:05:42.872343739 +0000 UTC m=+1096.758369057" watchObservedRunningTime="2025-10-06 12:05:42.879783813 +0000 UTC m=+1096.765809121" Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.897044 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.49439061 podStartE2EDuration="6.897026843s" podCreationTimestamp="2025-10-06 12:05:36 +0000 UTC" firstStartedPulling="2025-10-06 12:05:38.117573152 +0000 UTC m=+1092.003598460" lastFinishedPulling="2025-10-06 12:05:41.520209355 +0000 UTC m=+1095.406234693" observedRunningTime="2025-10-06 12:05:42.890393533 +0000 UTC m=+1096.776418841" watchObservedRunningTime="2025-10-06 12:05:42.897026843 +0000 UTC m=+1096.783052151" Oct 06 12:05:42 crc kubenswrapper[4958]: I1006 12:05:42.920161 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.365457285 podStartE2EDuration="6.920113198s" podCreationTimestamp="2025-10-06 12:05:36 +0000 UTC" firstStartedPulling="2025-10-06 12:05:37.965589303 +0000 UTC m=+1091.851614601" lastFinishedPulling="2025-10-06 12:05:41.520245206 +0000 UTC m=+1095.406270514" observedRunningTime="2025-10-06 12:05:42.910912971 +0000 UTC m=+1096.796938279" watchObservedRunningTime="2025-10-06 12:05:42.920113198 +0000 UTC m=+1096.806138496" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.391802 4958 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.560353 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfdc32c-205a-436a-b5a7-9437d8db55cb-logs\") pod \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.560458 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-combined-ca-bundle\") pod \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.560555 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-config-data\") pod \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.560644 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvd4g\" (UniqueName: \"kubernetes.io/projected/2bfdc32c-205a-436a-b5a7-9437d8db55cb-kube-api-access-rvd4g\") pod \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\" (UID: \"2bfdc32c-205a-436a-b5a7-9437d8db55cb\") " Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.560873 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bfdc32c-205a-436a-b5a7-9437d8db55cb-logs" (OuterVolumeSpecName: "logs") pod "2bfdc32c-205a-436a-b5a7-9437d8db55cb" (UID: "2bfdc32c-205a-436a-b5a7-9437d8db55cb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.561641 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfdc32c-205a-436a-b5a7-9437d8db55cb-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.568694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bfdc32c-205a-436a-b5a7-9437d8db55cb-kube-api-access-rvd4g" (OuterVolumeSpecName: "kube-api-access-rvd4g") pod "2bfdc32c-205a-436a-b5a7-9437d8db55cb" (UID: "2bfdc32c-205a-436a-b5a7-9437d8db55cb"). InnerVolumeSpecName "kube-api-access-rvd4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.597998 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bfdc32c-205a-436a-b5a7-9437d8db55cb" (UID: "2bfdc32c-205a-436a-b5a7-9437d8db55cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.599299 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-config-data" (OuterVolumeSpecName: "config-data") pod "2bfdc32c-205a-436a-b5a7-9437d8db55cb" (UID: "2bfdc32c-205a-436a-b5a7-9437d8db55cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.663283 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvd4g\" (UniqueName: \"kubernetes.io/projected/2bfdc32c-205a-436a-b5a7-9437d8db55cb-kube-api-access-rvd4g\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.663779 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.663797 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfdc32c-205a-436a-b5a7-9437d8db55cb-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859066 4958 generic.go:334] "Generic (PLEG): container finished" podID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerID="48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103" exitCode=0 Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859095 4958 generic.go:334] "Generic (PLEG): container finished" podID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerID="4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1" exitCode=143 Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859181 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859230 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfdc32c-205a-436a-b5a7-9437d8db55cb","Type":"ContainerDied","Data":"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103"} Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859268 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfdc32c-205a-436a-b5a7-9437d8db55cb","Type":"ContainerDied","Data":"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1"} Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2bfdc32c-205a-436a-b5a7-9437d8db55cb","Type":"ContainerDied","Data":"2198396185e3f89b1682385b30b866ac2cc170d5e7c53d487d6e4ca33e312b75"} Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.859300 4958 scope.go:117] "RemoveContainer" containerID="48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.892282 4958 scope.go:117] "RemoveContainer" containerID="4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.913693 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.918133 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.938474 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:43 crc kubenswrapper[4958]: E1006 12:05:43.938936 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-log" Oct 06 12:05:43 crc 
kubenswrapper[4958]: I1006 12:05:43.938972 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-log" Oct 06 12:05:43 crc kubenswrapper[4958]: E1006 12:05:43.939007 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-metadata" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.939013 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-metadata" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.939300 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-log" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.939357 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" containerName="nova-metadata-metadata" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.940691 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.953023 4958 scope.go:117] "RemoveContainer" containerID="48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103" Oct 06 12:05:43 crc kubenswrapper[4958]: E1006 12:05:43.953603 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103\": container with ID starting with 48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103 not found: ID does not exist" containerID="48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.953642 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103"} err="failed to get container status \"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103\": rpc error: code = NotFound desc = could not find container \"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103\": container with ID starting with 48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103 not found: ID does not exist" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.953668 4958 scope.go:117] "RemoveContainer" containerID="4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.953757 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.953898 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 12:05:43 crc kubenswrapper[4958]: E1006 12:05:43.954109 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1\": container with ID starting with 4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1 not found: ID does not exist" containerID="4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.954156 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1"} err="failed to get container status \"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1\": rpc error: code = NotFound desc = could not find container \"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1\": container with ID starting with 4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1 not found: ID does not exist" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.954183 4958 scope.go:117] "RemoveContainer" containerID="48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.954605 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103"} err="failed to get container status \"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103\": rpc error: code = NotFound desc = could not find container \"48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103\": container with ID starting with 48dbc90280171933cdee733cd038ed53e7f6497ff76168558dab77dc95c6f103 not found: ID does not exist" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.954628 4958 scope.go:117] "RemoveContainer" containerID="4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.954851 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1"} err="failed to get container status \"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1\": rpc error: code = NotFound desc = could not find container \"4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1\": container with ID starting with 4e7c76d63d9fde3f7d70c610913e7ee3b414d70e3754e995934baf465bab09d1 not found: ID does not exist" Oct 06 12:05:43 crc kubenswrapper[4958]: I1006 12:05:43.974601 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.071406 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.071819 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htxdk\" (UniqueName: \"kubernetes.io/projected/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-kube-api-access-htxdk\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.071890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-config-data\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.072743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.072848 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-logs\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.174348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-config-data\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.174389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htxdk\" (UniqueName: \"kubernetes.io/projected/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-kube-api-access-htxdk\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.174436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.174451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-logs\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 
12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.174520 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.175352 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-logs\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.179952 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.180445 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.180586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-config-data\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.192958 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htxdk\" (UniqueName: 
\"kubernetes.io/projected/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-kube-api-access-htxdk\") pod \"nova-metadata-0\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.266386 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.767800 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.869814 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb90e5bf-3a12-4105-9cd7-2904c373d2d4","Type":"ContainerStarted","Data":"b676aa6bfc13b94350f22ed11bffc73b08cc940595419d2b18616c86c1134b22"} Oct 06 12:05:44 crc kubenswrapper[4958]: I1006 12:05:44.940494 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bfdc32c-205a-436a-b5a7-9437d8db55cb" path="/var/lib/kubelet/pods/2bfdc32c-205a-436a-b5a7-9437d8db55cb/volumes" Oct 06 12:05:45 crc kubenswrapper[4958]: I1006 12:05:45.882336 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb90e5bf-3a12-4105-9cd7-2904c373d2d4","Type":"ContainerStarted","Data":"07d1d41603f32f214d04bd853db283042de1c11bfb4097c53ec0e7791a8a67a9"} Oct 06 12:05:45 crc kubenswrapper[4958]: I1006 12:05:45.883643 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb90e5bf-3a12-4105-9cd7-2904c373d2d4","Type":"ContainerStarted","Data":"8f435290d81d8bdbc653eccf3fe484bc93095ca23c5f6c010ab8c07f68846ba1"} Oct 06 12:05:45 crc kubenswrapper[4958]: I1006 12:05:45.884120 4958 generic.go:334] "Generic (PLEG): container finished" podID="6ed410ad-6f3a-4b26-bdad-1de8609840cb" containerID="dd9530070cb06ce9962ff69b3a29281c0fe964c5ca71d08d584dad6c2e23d6ee" exitCode=0 Oct 06 12:05:45 crc kubenswrapper[4958]: I1006 12:05:45.884169 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mbwxn" event={"ID":"6ed410ad-6f3a-4b26-bdad-1de8609840cb","Type":"ContainerDied","Data":"dd9530070cb06ce9962ff69b3a29281c0fe964c5ca71d08d584dad6c2e23d6ee"} Oct 06 12:05:45 crc kubenswrapper[4958]: I1006 12:05:45.912529 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.912509519 podStartE2EDuration="2.912509519s" podCreationTimestamp="2025-10-06 12:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:45.905979462 +0000 UTC m=+1099.792004770" watchObservedRunningTime="2025-10-06 12:05:45.912509519 +0000 UTC m=+1099.798534827" Oct 06 12:05:46 crc kubenswrapper[4958]: I1006 12:05:46.897515 4958 generic.go:334] "Generic (PLEG): container finished" podID="847e5baa-32e2-4bba-88c2-a493724883f9" containerID="5282bdbc80071e56bf4669acdc716b15a530e229f47e2b2546483be68d41c10f" exitCode=0 Oct 06 12:05:46 crc kubenswrapper[4958]: I1006 12:05:46.897581 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" event={"ID":"847e5baa-32e2-4bba-88c2-a493724883f9","Type":"ContainerDied","Data":"5282bdbc80071e56bf4669acdc716b15a530e229f47e2b2546483be68d41c10f"} Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.257167 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.257233 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.344249 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.344284 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.375889 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.415383 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.542861 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-scripts\") pod \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.542907 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-combined-ca-bundle\") pod \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.542967 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsl29\" (UniqueName: \"kubernetes.io/projected/6ed410ad-6f3a-4b26-bdad-1de8609840cb-kube-api-access-bsl29\") pod \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.543028 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-config-data\") pod \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\" (UID: \"6ed410ad-6f3a-4b26-bdad-1de8609840cb\") " Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.550532 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6ed410ad-6f3a-4b26-bdad-1de8609840cb-kube-api-access-bsl29" (OuterVolumeSpecName: "kube-api-access-bsl29") pod "6ed410ad-6f3a-4b26-bdad-1de8609840cb" (UID: "6ed410ad-6f3a-4b26-bdad-1de8609840cb"). InnerVolumeSpecName "kube-api-access-bsl29". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.553464 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-scripts" (OuterVolumeSpecName: "scripts") pod "6ed410ad-6f3a-4b26-bdad-1de8609840cb" (UID: "6ed410ad-6f3a-4b26-bdad-1de8609840cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.580272 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-config-data" (OuterVolumeSpecName: "config-data") pod "6ed410ad-6f3a-4b26-bdad-1de8609840cb" (UID: "6ed410ad-6f3a-4b26-bdad-1de8609840cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.582111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ed410ad-6f3a-4b26-bdad-1de8609840cb" (UID: "6ed410ad-6f3a-4b26-bdad-1de8609840cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.646168 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.646211 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.646226 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsl29\" (UniqueName: \"kubernetes.io/projected/6ed410ad-6f3a-4b26-bdad-1de8609840cb-kube-api-access-bsl29\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.646241 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed410ad-6f3a-4b26-bdad-1de8609840cb-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.668727 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.697076 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.810082 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dkwc8"] Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.810377 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerName="dnsmasq-dns" containerID="cri-o://7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac" gracePeriod=10 Oct 
06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.933573 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mbwxn" Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.934050 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mbwxn" event={"ID":"6ed410ad-6f3a-4b26-bdad-1de8609840cb","Type":"ContainerDied","Data":"7d4755b84f6d24f9c5d6032732dd4bb604d65ec9c610ed37507da0bb53a2b914"} Oct 06 12:05:47 crc kubenswrapper[4958]: I1006 12:05:47.934101 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d4755b84f6d24f9c5d6032732dd4bb604d65ec9c610ed37507da0bb53a2b914" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.044132 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.169934 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.170171 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-log" containerID="cri-o://0f6c302137de9d82ddf5bd0f335f230fa69764352d0459545e4aeeecb5c7622d" gracePeriod=30 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.170589 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-api" containerID="cri-o://e78290cb4959e8057ec06906db8acb4e63f48e640c93a51ac03fd0ab1c5a4644" gracePeriod=30 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.188849 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.189043 4958 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-log" containerID="cri-o://8f435290d81d8bdbc653eccf3fe484bc93095ca23c5f6c010ab8c07f68846ba1" gracePeriod=30 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.189426 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-metadata" containerID="cri-o://07d1d41603f32f214d04bd853db283042de1c11bfb4097c53ec0e7791a8a67a9" gracePeriod=30 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.194836 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": EOF" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.195070 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": EOF" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.295636 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.370068 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-config-data\") pod \"847e5baa-32e2-4bba-88c2-a493724883f9\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.370135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-combined-ca-bundle\") pod \"847e5baa-32e2-4bba-88c2-a493724883f9\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.370253 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmn8x\" (UniqueName: \"kubernetes.io/projected/847e5baa-32e2-4bba-88c2-a493724883f9-kube-api-access-mmn8x\") pod \"847e5baa-32e2-4bba-88c2-a493724883f9\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.370373 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-scripts\") pod \"847e5baa-32e2-4bba-88c2-a493724883f9\" (UID: \"847e5baa-32e2-4bba-88c2-a493724883f9\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.399824 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-scripts" (OuterVolumeSpecName: "scripts") pod "847e5baa-32e2-4bba-88c2-a493724883f9" (UID: "847e5baa-32e2-4bba-88c2-a493724883f9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.399994 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847e5baa-32e2-4bba-88c2-a493724883f9-kube-api-access-mmn8x" (OuterVolumeSpecName: "kube-api-access-mmn8x") pod "847e5baa-32e2-4bba-88c2-a493724883f9" (UID: "847e5baa-32e2-4bba-88c2-a493724883f9"). InnerVolumeSpecName "kube-api-access-mmn8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.417384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "847e5baa-32e2-4bba-88c2-a493724883f9" (UID: "847e5baa-32e2-4bba-88c2-a493724883f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.468343 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-config-data" (OuterVolumeSpecName: "config-data") pod "847e5baa-32e2-4bba-88c2-a493724883f9" (UID: "847e5baa-32e2-4bba-88c2-a493724883f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.475116 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.475175 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.475193 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmn8x\" (UniqueName: \"kubernetes.io/projected/847e5baa-32e2-4bba-88c2-a493724883f9-kube-api-access-mmn8x\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.475204 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847e5baa-32e2-4bba-88c2-a493724883f9-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.580850 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.680202 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-svc\") pod \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.680357 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfcmt\" (UniqueName: \"kubernetes.io/projected/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-kube-api-access-tfcmt\") pod \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.680384 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-config\") pod \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.680458 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-swift-storage-0\") pod \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.680501 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-nb\") pod \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.680532 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-sb\") pod \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\" (UID: \"7ba8a4da-fc79-4b19-ae45-208cbf09bbff\") " Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.686070 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-kube-api-access-tfcmt" (OuterVolumeSpecName: "kube-api-access-tfcmt") pod "7ba8a4da-fc79-4b19-ae45-208cbf09bbff" (UID: "7ba8a4da-fc79-4b19-ae45-208cbf09bbff"). InnerVolumeSpecName "kube-api-access-tfcmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.736886 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-config" (OuterVolumeSpecName: "config") pod "7ba8a4da-fc79-4b19-ae45-208cbf09bbff" (UID: "7ba8a4da-fc79-4b19-ae45-208cbf09bbff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.742598 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ba8a4da-fc79-4b19-ae45-208cbf09bbff" (UID: "7ba8a4da-fc79-4b19-ae45-208cbf09bbff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.761108 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ba8a4da-fc79-4b19-ae45-208cbf09bbff" (UID: "7ba8a4da-fc79-4b19-ae45-208cbf09bbff"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.764997 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ba8a4da-fc79-4b19-ae45-208cbf09bbff" (UID: "7ba8a4da-fc79-4b19-ae45-208cbf09bbff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.767880 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ba8a4da-fc79-4b19-ae45-208cbf09bbff" (UID: "7ba8a4da-fc79-4b19-ae45-208cbf09bbff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.768587 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.782453 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.782483 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.782493 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.782504 4958 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tfcmt\" (UniqueName: \"kubernetes.io/projected/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-kube-api-access-tfcmt\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.782516 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.782525 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ba8a4da-fc79-4b19-ae45-208cbf09bbff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.959802 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" event={"ID":"847e5baa-32e2-4bba-88c2-a493724883f9","Type":"ContainerDied","Data":"437c76c7713c812c45757f35dcc01a40b5b7271ea335dab9cad25ecf6d035c90"} Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.960361 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="437c76c7713c812c45757f35dcc01a40b5b7271ea335dab9cad25ecf6d035c90" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.960518 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kpxjr" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.976209 4958 generic.go:334] "Generic (PLEG): container finished" podID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerID="7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac" exitCode=0 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.976297 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.976324 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" event={"ID":"7ba8a4da-fc79-4b19-ae45-208cbf09bbff","Type":"ContainerDied","Data":"7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac"} Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.977121 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dkwc8" event={"ID":"7ba8a4da-fc79-4b19-ae45-208cbf09bbff","Type":"ContainerDied","Data":"185d108b408bf63436e7eb728dfa99b56ab8d3a6fbb000b89702740ee302ba47"} Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.977404 4958 scope.go:117] "RemoveContainer" containerID="7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac" Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.981627 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerID="07d1d41603f32f214d04bd853db283042de1c11bfb4097c53ec0e7791a8a67a9" exitCode=0 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.981652 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerID="8f435290d81d8bdbc653eccf3fe484bc93095ca23c5f6c010ab8c07f68846ba1" exitCode=143 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.981688 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb90e5bf-3a12-4105-9cd7-2904c373d2d4","Type":"ContainerDied","Data":"07d1d41603f32f214d04bd853db283042de1c11bfb4097c53ec0e7791a8a67a9"} Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.981711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb90e5bf-3a12-4105-9cd7-2904c373d2d4","Type":"ContainerDied","Data":"8f435290d81d8bdbc653eccf3fe484bc93095ca23c5f6c010ab8c07f68846ba1"} Oct 06 
12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.983022 4958 generic.go:334] "Generic (PLEG): container finished" podID="749169b6-fbac-4d10-858e-9d8c592ac408" containerID="0f6c302137de9d82ddf5bd0f335f230fa69764352d0459545e4aeeecb5c7622d" exitCode=143 Oct 06 12:05:48 crc kubenswrapper[4958]: I1006 12:05:48.983281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749169b6-fbac-4d10-858e-9d8c592ac408","Type":"ContainerDied","Data":"0f6c302137de9d82ddf5bd0f335f230fa69764352d0459545e4aeeecb5c7622d"} Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.022545 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.025608 4958 scope.go:117] "RemoveContainer" containerID="977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033186 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.033602 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-metadata" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033620 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-metadata" Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.033631 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed410ad-6f3a-4b26-bdad-1de8609840cb" containerName="nova-manage" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033638 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed410ad-6f3a-4b26-bdad-1de8609840cb" containerName="nova-manage" Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.033650 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="847e5baa-32e2-4bba-88c2-a493724883f9" containerName="nova-cell1-conductor-db-sync" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033657 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e5baa-32e2-4bba-88c2-a493724883f9" containerName="nova-cell1-conductor-db-sync" Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.033668 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-log" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033674 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-log" Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.033682 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerName="dnsmasq-dns" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033688 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerName="dnsmasq-dns" Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.033720 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerName="init" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033726 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerName="init" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033911 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed410ad-6f3a-4b26-bdad-1de8609840cb" containerName="nova-manage" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033923 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" containerName="dnsmasq-dns" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033931 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-metadata" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033940 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" containerName="nova-metadata-log" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.033956 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="847e5baa-32e2-4bba-88c2-a493724883f9" containerName="nova-cell1-conductor-db-sync" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.034589 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.037082 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.055953 4958 scope.go:117] "RemoveContainer" containerID="7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.056043 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.059194 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac\": container with ID starting with 7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac not found: ID does not exist" containerID="7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.059222 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac"} err="failed to get container status \"7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac\": 
rpc error: code = NotFound desc = could not find container \"7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac\": container with ID starting with 7dd68528e2f386a07bb63a0217b790742e4e8b2ebb6af2e3b0d3ce53c5ef34ac not found: ID does not exist" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.059244 4958 scope.go:117] "RemoveContainer" containerID="977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3" Oct 06 12:05:49 crc kubenswrapper[4958]: E1006 12:05:49.060220 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3\": container with ID starting with 977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3 not found: ID does not exist" containerID="977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.060245 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3"} err="failed to get container status \"977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3\": rpc error: code = NotFound desc = could not find container \"977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3\": container with ID starting with 977e96c8c20b5ae44a5462c50c0f51725f01638f0e922a4631bd2485e2469cc3 not found: ID does not exist" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.080466 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dkwc8"] Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.087845 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dkwc8"] Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090221 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-combined-ca-bundle\") pod \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090272 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-logs\") pod \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090386 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htxdk\" (UniqueName: \"kubernetes.io/projected/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-kube-api-access-htxdk\") pod \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-nova-metadata-tls-certs\") pod \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090543 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-config-data\") pod \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\" (UID: \"cb90e5bf-3a12-4105-9cd7-2904c373d2d4\") " Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090955 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6tz\" (UniqueName: \"kubernetes.io/projected/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-kube-api-access-sx6tz\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " 
pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.090997 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.091027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.091054 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-logs" (OuterVolumeSpecName: "logs") pod "cb90e5bf-3a12-4105-9cd7-2904c373d2d4" (UID: "cb90e5bf-3a12-4105-9cd7-2904c373d2d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.094351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-kube-api-access-htxdk" (OuterVolumeSpecName: "kube-api-access-htxdk") pod "cb90e5bf-3a12-4105-9cd7-2904c373d2d4" (UID: "cb90e5bf-3a12-4105-9cd7-2904c373d2d4"). InnerVolumeSpecName "kube-api-access-htxdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.116104 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb90e5bf-3a12-4105-9cd7-2904c373d2d4" (UID: "cb90e5bf-3a12-4105-9cd7-2904c373d2d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.116976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-config-data" (OuterVolumeSpecName: "config-data") pod "cb90e5bf-3a12-4105-9cd7-2904c373d2d4" (UID: "cb90e5bf-3a12-4105-9cd7-2904c373d2d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.139178 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cb90e5bf-3a12-4105-9cd7-2904c373d2d4" (UID: "cb90e5bf-3a12-4105-9cd7-2904c373d2d4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.192924 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6tz\" (UniqueName: \"kubernetes.io/projected/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-kube-api-access-sx6tz\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193007 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193061 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193195 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193207 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193215 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htxdk\" (UniqueName: \"kubernetes.io/projected/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-kube-api-access-htxdk\") on node \"crc\" DevicePath \"\"" 
Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193242 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.193250 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb90e5bf-3a12-4105-9cd7-2904c373d2d4-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.196079 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.199889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.210293 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6tz\" (UniqueName: \"kubernetes.io/projected/f769ee5d-6085-4e88-a212-2c3e2e8f6f2b-kube-api-access-sx6tz\") pod \"nova-cell1-conductor-0\" (UID: \"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.349898 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4958]: I1006 12:05:49.825549 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 12:05:49 crc kubenswrapper[4958]: W1006 12:05:49.826596 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf769ee5d_6085_4e88_a212_2c3e2e8f6f2b.slice/crio-efddf5b0d28fd51d06e455dee39f915329c22d003d6d5d30375b7991145aa7b5 WatchSource:0}: Error finding container efddf5b0d28fd51d06e455dee39f915329c22d003d6d5d30375b7991145aa7b5: Status 404 returned error can't find the container with id efddf5b0d28fd51d06e455dee39f915329c22d003d6d5d30375b7991145aa7b5 Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.016481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b","Type":"ContainerStarted","Data":"efddf5b0d28fd51d06e455dee39f915329c22d003d6d5d30375b7991145aa7b5"} Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.031744 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.031739 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cb90e5bf-3a12-4105-9cd7-2904c373d2d4","Type":"ContainerDied","Data":"b676aa6bfc13b94350f22ed11bffc73b08cc940595419d2b18616c86c1134b22"} Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.031875 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" containerName="nova-scheduler-scheduler" containerID="cri-o://cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc" gracePeriod=30 Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.032110 4958 scope.go:117] "RemoveContainer" containerID="07d1d41603f32f214d04bd853db283042de1c11bfb4097c53ec0e7791a8a67a9" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.078522 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.079723 4958 scope.go:117] "RemoveContainer" containerID="8f435290d81d8bdbc653eccf3fe484bc93095ca23c5f6c010ab8c07f68846ba1" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.089030 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.095993 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.098493 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.107579 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.108506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.127114 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.211728 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.211792 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-config-data\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.212288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96807e80-df6e-4105-a74d-296cf572e80f-logs\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.212387 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.212486 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxrk\" (UniqueName: \"kubernetes.io/projected/96807e80-df6e-4105-a74d-296cf572e80f-kube-api-access-9fxrk\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.313653 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.313705 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-config-data\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.313812 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96807e80-df6e-4105-a74d-296cf572e80f-logs\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.313835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.313864 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fxrk\" (UniqueName: \"kubernetes.io/projected/96807e80-df6e-4105-a74d-296cf572e80f-kube-api-access-9fxrk\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.314390 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96807e80-df6e-4105-a74d-296cf572e80f-logs\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.319776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.327361 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.331294 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-config-data\") pod \"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.332001 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fxrk\" (UniqueName: \"kubernetes.io/projected/96807e80-df6e-4105-a74d-296cf572e80f-kube-api-access-9fxrk\") pod 
\"nova-metadata-0\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.434550 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.755268 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.924708 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba8a4da-fc79-4b19-ae45-208cbf09bbff" path="/var/lib/kubelet/pods/7ba8a4da-fc79-4b19-ae45-208cbf09bbff/volumes" Oct 06 12:05:50 crc kubenswrapper[4958]: I1006 12:05:50.925969 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb90e5bf-3a12-4105-9cd7-2904c373d2d4" path="/var/lib/kubelet/pods/cb90e5bf-3a12-4105-9cd7-2904c373d2d4/volumes" Oct 06 12:05:51 crc kubenswrapper[4958]: I1006 12:05:51.042788 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f769ee5d-6085-4e88-a212-2c3e2e8f6f2b","Type":"ContainerStarted","Data":"f5f65ed71b53761fee6e6ae70fdebf76ff9676a78412cd30fee0ab5a282a1c3b"} Oct 06 12:05:51 crc kubenswrapper[4958]: I1006 12:05:51.043921 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:51 crc kubenswrapper[4958]: I1006 12:05:51.045672 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96807e80-df6e-4105-a74d-296cf572e80f","Type":"ContainerStarted","Data":"f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e"} Oct 06 12:05:51 crc kubenswrapper[4958]: I1006 12:05:51.045695 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"96807e80-df6e-4105-a74d-296cf572e80f","Type":"ContainerStarted","Data":"8003a94808936843e6f4ecfb2f75e90d53af0eb29be1a397de1df3c72b807356"} Oct 06 12:05:51 crc kubenswrapper[4958]: I1006 12:05:51.062402 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.062384553 podStartE2EDuration="3.062384553s" podCreationTimestamp="2025-10-06 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:51.058594688 +0000 UTC m=+1104.944619996" watchObservedRunningTime="2025-10-06 12:05:51.062384553 +0000 UTC m=+1104.948409861" Oct 06 12:05:52 crc kubenswrapper[4958]: I1006 12:05:52.068977 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96807e80-df6e-4105-a74d-296cf572e80f","Type":"ContainerStarted","Data":"0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46"} Oct 06 12:05:52 crc kubenswrapper[4958]: I1006 12:05:52.101835 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.101814664 podStartE2EDuration="2.101814664s" podCreationTimestamp="2025-10-06 12:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:52.092271217 +0000 UTC m=+1105.978296525" watchObservedRunningTime="2025-10-06 12:05:52.101814664 +0000 UTC m=+1105.987839982" Oct 06 12:05:52 crc kubenswrapper[4958]: E1006 12:05:52.346056 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:05:52 crc 
kubenswrapper[4958]: E1006 12:05:52.347611 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 06 12:05:52 crc kubenswrapper[4958]: E1006 12:05:52.349347 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 06 12:05:52 crc kubenswrapper[4958]: E1006 12:05:52.349405 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" containerName="nova-scheduler-scheduler"
Oct 06 12:05:53 crc kubenswrapper[4958]: I1006 12:05:53.969990 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.109688 4958 generic.go:334] "Generic (PLEG): container finished" podID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" containerID="cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc" exitCode=0
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.109760 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a98f260-a4db-4810-8dbd-42cdc4bb3e20","Type":"ContainerDied","Data":"cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc"}
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.115541 4958 generic.go:334] "Generic (PLEG): container finished" podID="749169b6-fbac-4d10-858e-9d8c592ac408" containerID="e78290cb4959e8057ec06906db8acb4e63f48e640c93a51ac03fd0ab1c5a4644" exitCode=0
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.115753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749169b6-fbac-4d10-858e-9d8c592ac408","Type":"ContainerDied","Data":"e78290cb4959e8057ec06906db8acb4e63f48e640c93a51ac03fd0ab1c5a4644"}
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.116395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749169b6-fbac-4d10-858e-9d8c592ac408","Type":"ContainerDied","Data":"604e00a66a104af7a5c7bcc7f217beac88b23b4bbacdf261594cba17873e9530"}
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.116413 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604e00a66a104af7a5c7bcc7f217beac88b23b4bbacdf261594cba17873e9530"
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.162308 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.205335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-combined-ca-bundle\") pod \"749169b6-fbac-4d10-858e-9d8c592ac408\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.205648 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749169b6-fbac-4d10-858e-9d8c592ac408-logs\") pod \"749169b6-fbac-4d10-858e-9d8c592ac408\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.205689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-config-data\") pod \"749169b6-fbac-4d10-858e-9d8c592ac408\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.205727 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x64d\" (UniqueName: \"kubernetes.io/projected/749169b6-fbac-4d10-858e-9d8c592ac408-kube-api-access-7x64d\") pod \"749169b6-fbac-4d10-858e-9d8c592ac408\" (UID: \"749169b6-fbac-4d10-858e-9d8c592ac408\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.208564 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749169b6-fbac-4d10-858e-9d8c592ac408-logs" (OuterVolumeSpecName: "logs") pod "749169b6-fbac-4d10-858e-9d8c592ac408" (UID: "749169b6-fbac-4d10-858e-9d8c592ac408"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.212693 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749169b6-fbac-4d10-858e-9d8c592ac408-kube-api-access-7x64d" (OuterVolumeSpecName: "kube-api-access-7x64d") pod "749169b6-fbac-4d10-858e-9d8c592ac408" (UID: "749169b6-fbac-4d10-858e-9d8c592ac408"). InnerVolumeSpecName "kube-api-access-7x64d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.232534 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-config-data" (OuterVolumeSpecName: "config-data") pod "749169b6-fbac-4d10-858e-9d8c592ac408" (UID: "749169b6-fbac-4d10-858e-9d8c592ac408"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.233858 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "749169b6-fbac-4d10-858e-9d8c592ac408" (UID: "749169b6-fbac-4d10-858e-9d8c592ac408"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.253413 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.307157 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-combined-ca-bundle\") pod \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.307555 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n225\" (UniqueName: \"kubernetes.io/projected/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-kube-api-access-4n225\") pod \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.307692 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-config-data\") pod \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\" (UID: \"3a98f260-a4db-4810-8dbd-42cdc4bb3e20\") "
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.308267 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749169b6-fbac-4d10-858e-9d8c592ac408-logs\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.308284 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.308296 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x64d\" (UniqueName: \"kubernetes.io/projected/749169b6-fbac-4d10-858e-9d8c592ac408-kube-api-access-7x64d\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.308308 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749169b6-fbac-4d10-858e-9d8c592ac408-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.310249 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-kube-api-access-4n225" (OuterVolumeSpecName: "kube-api-access-4n225") pod "3a98f260-a4db-4810-8dbd-42cdc4bb3e20" (UID: "3a98f260-a4db-4810-8dbd-42cdc4bb3e20"). InnerVolumeSpecName "kube-api-access-4n225". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.335025 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a98f260-a4db-4810-8dbd-42cdc4bb3e20" (UID: "3a98f260-a4db-4810-8dbd-42cdc4bb3e20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.345398 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-config-data" (OuterVolumeSpecName: "config-data") pod "3a98f260-a4db-4810-8dbd-42cdc4bb3e20" (UID: "3a98f260-a4db-4810-8dbd-42cdc4bb3e20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.409958 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.410288 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:54 crc kubenswrapper[4958]: I1006 12:05:54.410417 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n225\" (UniqueName: \"kubernetes.io/projected/3a98f260-a4db-4810-8dbd-42cdc4bb3e20-kube-api-access-4n225\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.129324 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.129357 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.129328 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a98f260-a4db-4810-8dbd-42cdc4bb3e20","Type":"ContainerDied","Data":"6b1eecbd9d48bc5f40c856877e563e83d492397811aecb6bbd8d72c321c6275f"}
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.130286 4958 scope.go:117] "RemoveContainer" containerID="cf7633f239f2f13872f1980b837fdc1504a00b72f17c653bfac224737c0f61dc"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.159386 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.181636 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192042 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: E1006 12:05:55.192580 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" containerName="nova-scheduler-scheduler"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192625 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" containerName="nova-scheduler-scheduler"
Oct 06 12:05:55 crc kubenswrapper[4958]: E1006 12:05:55.192652 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-log"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192661 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-log"
Oct 06 12:05:55 crc kubenswrapper[4958]: E1006 12:05:55.192683 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-api"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192691 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-api"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192936 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-log"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192961 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" containerName="nova-scheduler-scheduler"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.192977 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" containerName="nova-api-api"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.193801 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.197846 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.202460 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.211725 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.222788 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.224326 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzwx\" (UniqueName: \"kubernetes.io/projected/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-kube-api-access-tdzwx\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.224505 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-config-data\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.224732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.231492 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.233240 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.236377 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.252242 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdzwx\" (UniqueName: \"kubernetes.io/projected/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-kube-api-access-tdzwx\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334165 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-config-data\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334234 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-config-data\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334496 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334560 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78492be8-7a21-4a4b-902e-947656515f65-logs\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.334774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgg8\" (UniqueName: \"kubernetes.io/projected/78492be8-7a21-4a4b-902e-947656515f65-kube-api-access-ctgg8\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.338535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.339126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-config-data\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.356743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdzwx\" (UniqueName: \"kubernetes.io/projected/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-kube-api-access-tdzwx\") pod \"nova-scheduler-0\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.435700 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.436098 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.437028 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-config-data\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.437256 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.437305 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78492be8-7a21-4a4b-902e-947656515f65-logs\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.437401 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctgg8\" (UniqueName: \"kubernetes.io/projected/78492be8-7a21-4a4b-902e-947656515f65-kube-api-access-ctgg8\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.438184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78492be8-7a21-4a4b-902e-947656515f65-logs\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.440554 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.440705 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-config-data\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.454315 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctgg8\" (UniqueName: \"kubernetes.io/projected/78492be8-7a21-4a4b-902e-947656515f65-kube-api-access-ctgg8\") pod \"nova-api-0\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " pod="openstack/nova-api-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.565738 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 12:05:55 crc kubenswrapper[4958]: I1006 12:05:55.573077 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 06 12:05:56 crc kubenswrapper[4958]: I1006 12:05:56.018802 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 12:05:56 crc kubenswrapper[4958]: I1006 12:05:56.115370 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 06 12:05:56 crc kubenswrapper[4958]: W1006 12:05:56.126117 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78492be8_7a21_4a4b_902e_947656515f65.slice/crio-980b3a5e5f6c99f071bf9499e728dd922f8cba8fccb21c00635469626c70d361 WatchSource:0}: Error finding container 980b3a5e5f6c99f071bf9499e728dd922f8cba8fccb21c00635469626c70d361: Status 404 returned error can't find the container with id 980b3a5e5f6c99f071bf9499e728dd922f8cba8fccb21c00635469626c70d361
Oct 06 12:05:56 crc kubenswrapper[4958]: I1006 12:05:56.146269 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5","Type":"ContainerStarted","Data":"89833430fba07e3d60c6687319fe1dac04127e029505b51414790d33a7781233"}
Oct 06 12:05:56 crc kubenswrapper[4958]: I1006 12:05:56.151123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78492be8-7a21-4a4b-902e-947656515f65","Type":"ContainerStarted","Data":"980b3a5e5f6c99f071bf9499e728dd922f8cba8fccb21c00635469626c70d361"}
Oct 06 12:05:56 crc kubenswrapper[4958]: I1006 12:05:56.923181 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a98f260-a4db-4810-8dbd-42cdc4bb3e20" path="/var/lib/kubelet/pods/3a98f260-a4db-4810-8dbd-42cdc4bb3e20/volumes"
Oct 06 12:05:56 crc kubenswrapper[4958]: I1006 12:05:56.923993 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="749169b6-fbac-4d10-858e-9d8c592ac408" path="/var/lib/kubelet/pods/749169b6-fbac-4d10-858e-9d8c592ac408/volumes"
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.160885 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78492be8-7a21-4a4b-902e-947656515f65","Type":"ContainerStarted","Data":"2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506"}
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.161225 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78492be8-7a21-4a4b-902e-947656515f65","Type":"ContainerStarted","Data":"b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626"}
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.162877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5","Type":"ContainerStarted","Data":"688cfc82e38fc05dbd4d1059a62732effe2a28b07687f722d3d0460704530bca"}
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.193057 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.193037699 podStartE2EDuration="2.193037699s" podCreationTimestamp="2025-10-06 12:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:57.180798131 +0000 UTC m=+1111.066823449" watchObservedRunningTime="2025-10-06 12:05:57.193037699 +0000 UTC m=+1111.079063027"
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.208069 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.208050222 podStartE2EDuration="2.208050222s" podCreationTimestamp="2025-10-06 12:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:57.203317669 +0000 UTC m=+1111.089342977" watchObservedRunningTime="2025-10-06 12:05:57.208050222 +0000 UTC m=+1111.094075540"
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.883784 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:05:57 crc kubenswrapper[4958]: I1006 12:05:57.884200 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ed469bc1-6294-481d-aed5-136cc0585e1c" containerName="kube-state-metrics" containerID="cri-o://9d08a890706eff159de753dbced2e509a813db0e921778a980d06d2cea82d28e" gracePeriod=30
Oct 06 12:05:58 crc kubenswrapper[4958]: I1006 12:05:58.173868 4958 generic.go:334] "Generic (PLEG): container finished" podID="ed469bc1-6294-481d-aed5-136cc0585e1c" containerID="9d08a890706eff159de753dbced2e509a813db0e921778a980d06d2cea82d28e" exitCode=2
Oct 06 12:05:58 crc kubenswrapper[4958]: I1006 12:05:58.174544 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed469bc1-6294-481d-aed5-136cc0585e1c","Type":"ContainerDied","Data":"9d08a890706eff159de753dbced2e509a813db0e921778a980d06d2cea82d28e"}
Oct 06 12:05:58 crc kubenswrapper[4958]: I1006 12:05:58.387097 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 06 12:05:58 crc kubenswrapper[4958]: I1006 12:05:58.494231 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh5lx\" (UniqueName: \"kubernetes.io/projected/ed469bc1-6294-481d-aed5-136cc0585e1c-kube-api-access-rh5lx\") pod \"ed469bc1-6294-481d-aed5-136cc0585e1c\" (UID: \"ed469bc1-6294-481d-aed5-136cc0585e1c\") "
Oct 06 12:05:58 crc kubenswrapper[4958]: I1006 12:05:58.499919 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed469bc1-6294-481d-aed5-136cc0585e1c-kube-api-access-rh5lx" (OuterVolumeSpecName: "kube-api-access-rh5lx") pod "ed469bc1-6294-481d-aed5-136cc0585e1c" (UID: "ed469bc1-6294-481d-aed5-136cc0585e1c"). InnerVolumeSpecName "kube-api-access-rh5lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:05:58 crc kubenswrapper[4958]: I1006 12:05:58.596898 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh5lx\" (UniqueName: \"kubernetes.io/projected/ed469bc1-6294-481d-aed5-136cc0585e1c-kube-api-access-rh5lx\") on node \"crc\" DevicePath \"\""
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.189654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed469bc1-6294-481d-aed5-136cc0585e1c","Type":"ContainerDied","Data":"4177041dad3df0d002a27dd05416f6b032363b7c4c792f80308c6d25f5172292"}
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.189753 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.190063 4958 scope.go:117] "RemoveContainer" containerID="9d08a890706eff159de753dbced2e509a813db0e921778a980d06d2cea82d28e"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.246725 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.256247 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.262691 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:05:59 crc kubenswrapper[4958]: E1006 12:05:59.263282 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed469bc1-6294-481d-aed5-136cc0585e1c" containerName="kube-state-metrics"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.263316 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed469bc1-6294-481d-aed5-136cc0585e1c" containerName="kube-state-metrics"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.263662 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed469bc1-6294-481d-aed5-136cc0585e1c" containerName="kube-state-metrics"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.264514 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.266877 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.267593 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.272030 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.308292 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.308462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.308571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.308733 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8b58\" (UniqueName: \"kubernetes.io/projected/670f79b0-7850-4798-a452-f387018cd4d3-kube-api-access-s8b58\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.384629 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.415681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8b58\" (UniqueName: \"kubernetes.io/projected/670f79b0-7850-4798-a452-f387018cd4d3-kube-api-access-s8b58\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.415896 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.416117 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.416242 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.423569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.433478 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.437420 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8b58\" (UniqueName: \"kubernetes.io/projected/670f79b0-7850-4798-a452-f387018cd4d3-kube-api-access-s8b58\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.437991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/670f79b0-7850-4798-a452-f387018cd4d3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"670f79b0-7850-4798-a452-f387018cd4d3\") " pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.599818 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.878403 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.878888 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-central-agent" containerID="cri-o://a510678ee0d09a9b3aebaa16ea4ffde1fa2ecf35424ec5581a23101af74b1ddf" gracePeriod=30
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.878908 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="proxy-httpd" containerID="cri-o://792e79d8587fa71c5c2166f0c3dea77b11fbc99452d0f4541c45ecf36b647b1c" gracePeriod=30
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.878985 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-notification-agent" containerID="cri-o://ebe011f647b372de72d69d4f11ca1b0380879092844dd16287ee5becbf99b893" gracePeriod=30
Oct 06 12:05:59 crc kubenswrapper[4958]: I1006 12:05:59.879005 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="sg-core" containerID="cri-o://52d57eb75384e2796cd25f00fea69b3e1d15a653bd62d82d2ef2327172ba9adb" gracePeriod=30
Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.064233 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 12:06:00 crc kubenswrapper[4958]: W1006 12:06:00.075174 4958 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod670f79b0_7850_4798_a452_f387018cd4d3.slice/crio-9df0ff1ac272624995318ce57d67fea29230b33fdb3a50e3143e1730845a5033 WatchSource:0}: Error finding container 9df0ff1ac272624995318ce57d67fea29230b33fdb3a50e3143e1730845a5033: Status 404 returned error can't find the container with id 9df0ff1ac272624995318ce57d67fea29230b33fdb3a50e3143e1730845a5033 Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.206195 4958 generic.go:334] "Generic (PLEG): container finished" podID="df7de90f-720e-4d12-92b8-ced2ed491360" containerID="792e79d8587fa71c5c2166f0c3dea77b11fbc99452d0f4541c45ecf36b647b1c" exitCode=0 Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.206236 4958 generic.go:334] "Generic (PLEG): container finished" podID="df7de90f-720e-4d12-92b8-ced2ed491360" containerID="52d57eb75384e2796cd25f00fea69b3e1d15a653bd62d82d2ef2327172ba9adb" exitCode=2 Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.206277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerDied","Data":"792e79d8587fa71c5c2166f0c3dea77b11fbc99452d0f4541c45ecf36b647b1c"} Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.206331 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerDied","Data":"52d57eb75384e2796cd25f00fea69b3e1d15a653bd62d82d2ef2327172ba9adb"} Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.207422 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"670f79b0-7850-4798-a452-f387018cd4d3","Type":"ContainerStarted","Data":"9df0ff1ac272624995318ce57d67fea29230b33fdb3a50e3143e1730845a5033"} Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.434439 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 
06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.435810 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.565853 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 12:06:00 crc kubenswrapper[4958]: I1006 12:06:00.927055 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed469bc1-6294-481d-aed5-136cc0585e1c" path="/var/lib/kubelet/pods/ed469bc1-6294-481d-aed5-136cc0585e1c/volumes" Oct 06 12:06:01 crc kubenswrapper[4958]: I1006 12:06:01.218129 4958 generic.go:334] "Generic (PLEG): container finished" podID="df7de90f-720e-4d12-92b8-ced2ed491360" containerID="a510678ee0d09a9b3aebaa16ea4ffde1fa2ecf35424ec5581a23101af74b1ddf" exitCode=0 Oct 06 12:06:01 crc kubenswrapper[4958]: I1006 12:06:01.218198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerDied","Data":"a510678ee0d09a9b3aebaa16ea4ffde1fa2ecf35424ec5581a23101af74b1ddf"} Oct 06 12:06:01 crc kubenswrapper[4958]: I1006 12:06:01.220204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"670f79b0-7850-4798-a452-f387018cd4d3","Type":"ContainerStarted","Data":"c648f295695ab828ad19dc66ae372a6939eb52ac8fd6b30e4e458e233b558af1"} Oct 06 12:06:01 crc kubenswrapper[4958]: I1006 12:06:01.254536 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.903738076 podStartE2EDuration="2.254509116s" podCreationTimestamp="2025-10-06 12:05:59 +0000 UTC" firstStartedPulling="2025-10-06 12:06:00.078626832 +0000 UTC m=+1113.964652140" lastFinishedPulling="2025-10-06 12:06:00.429397872 +0000 UTC m=+1114.315423180" observedRunningTime="2025-10-06 12:06:01.240806923 +0000 UTC m=+1115.126832241" 
watchObservedRunningTime="2025-10-06 12:06:01.254509116 +0000 UTC m=+1115.140534434" Oct 06 12:06:01 crc kubenswrapper[4958]: I1006 12:06:01.445270 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:01 crc kubenswrapper[4958]: I1006 12:06:01.445395 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.236582 4958 generic.go:334] "Generic (PLEG): container finished" podID="df7de90f-720e-4d12-92b8-ced2ed491360" containerID="ebe011f647b372de72d69d4f11ca1b0380879092844dd16287ee5becbf99b893" exitCode=0 Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.236696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerDied","Data":"ebe011f647b372de72d69d4f11ca1b0380879092844dd16287ee5becbf99b893"} Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.237270 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7de90f-720e-4d12-92b8-ced2ed491360","Type":"ContainerDied","Data":"4fcb3f82b8bf1bbf6ce9c1fcfdf0e4cfcfe6e7b8a1d62e6a45e631b5f812db3c"} Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.237297 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fcb3f82b8bf1bbf6ce9c1fcfdf0e4cfcfe6e7b8a1d62e6a45e631b5f812db3c" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.237418 4958 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.277970 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370007 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-sg-core-conf-yaml\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370064 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-scripts\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370091 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-log-httpd\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370233 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-run-httpd\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370298 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-config-data\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc 
kubenswrapper[4958]: I1006 12:06:02.370323 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkb94\" (UniqueName: \"kubernetes.io/projected/df7de90f-720e-4d12-92b8-ced2ed491360-kube-api-access-nkb94\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370346 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-combined-ca-bundle\") pod \"df7de90f-720e-4d12-92b8-ced2ed491360\" (UID: \"df7de90f-720e-4d12-92b8-ced2ed491360\") " Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.370920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.371408 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.386476 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-scripts" (OuterVolumeSpecName: "scripts") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.387381 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7de90f-720e-4d12-92b8-ced2ed491360-kube-api-access-nkb94" (OuterVolumeSpecName: "kube-api-access-nkb94") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "kube-api-access-nkb94". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.397518 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.442654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.468321 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-config-data" (OuterVolumeSpecName: "config-data") pod "df7de90f-720e-4d12-92b8-ced2ed491360" (UID: "df7de90f-720e-4d12-92b8-ced2ed491360"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472259 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472286 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472295 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472303 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7de90f-720e-4d12-92b8-ced2ed491360-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472312 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472320 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkb94\" (UniqueName: \"kubernetes.io/projected/df7de90f-720e-4d12-92b8-ced2ed491360-kube-api-access-nkb94\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:02 crc kubenswrapper[4958]: I1006 12:06:02.472330 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7de90f-720e-4d12-92b8-ced2ed491360-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.248666 4958 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.279074 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.297536 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.309688 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:03 crc kubenswrapper[4958]: E1006 12:06:03.310207 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="sg-core" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310230 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="sg-core" Oct 06 12:06:03 crc kubenswrapper[4958]: E1006 12:06:03.310256 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-central-agent" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310265 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-central-agent" Oct 06 12:06:03 crc kubenswrapper[4958]: E1006 12:06:03.310292 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-notification-agent" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310301 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-notification-agent" Oct 06 12:06:03 crc kubenswrapper[4958]: E1006 12:06:03.310324 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="proxy-httpd" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 
12:06:03.310332 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="proxy-httpd" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310547 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="sg-core" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310577 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="proxy-httpd" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310588 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-notification-agent" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.310610 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" containerName="ceilometer-central-agent" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.312809 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.316831 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.316900 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.317106 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.322488 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402238 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-scripts\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-config-data\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8pg8\" (UniqueName: \"kubernetes.io/projected/b717fb97-37ee-4f1c-8d12-6a321800b724-kube-api-access-g8pg8\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-log-httpd\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402459 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-run-httpd\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402526 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.402549 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504006 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-scripts\") pod \"ceilometer-0\" (UID: 
\"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504074 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-config-data\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504100 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8pg8\" (UniqueName: \"kubernetes.io/projected/b717fb97-37ee-4f1c-8d12-6a321800b724-kube-api-access-g8pg8\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-log-httpd\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504158 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504184 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-run-httpd\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504237 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.504896 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-run-httpd\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.505019 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-log-httpd\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.510511 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.511179 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc 
kubenswrapper[4958]: I1006 12:06:03.511490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.512360 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-scripts\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.518006 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-config-data\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.526356 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8pg8\" (UniqueName: \"kubernetes.io/projected/b717fb97-37ee-4f1c-8d12-6a321800b724-kube-api-access-g8pg8\") pod \"ceilometer-0\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4958]: I1006 12:06:03.666224 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:04 crc kubenswrapper[4958]: W1006 12:06:04.172273 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb717fb97_37ee_4f1c_8d12_6a321800b724.slice/crio-8e446a6516ac9b4f0289af7f39be335d884475f0ecce22a89e96f21aa57c3d41 WatchSource:0}: Error finding container 8e446a6516ac9b4f0289af7f39be335d884475f0ecce22a89e96f21aa57c3d41: Status 404 returned error can't find the container with id 8e446a6516ac9b4f0289af7f39be335d884475f0ecce22a89e96f21aa57c3d41 Oct 06 12:06:04 crc kubenswrapper[4958]: I1006 12:06:04.172702 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:04 crc kubenswrapper[4958]: I1006 12:06:04.271913 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerStarted","Data":"8e446a6516ac9b4f0289af7f39be335d884475f0ecce22a89e96f21aa57c3d41"} Oct 06 12:06:04 crc kubenswrapper[4958]: I1006 12:06:04.923303 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7de90f-720e-4d12-92b8-ced2ed491360" path="/var/lib/kubelet/pods/df7de90f-720e-4d12-92b8-ced2ed491360/volumes" Oct 06 12:06:05 crc kubenswrapper[4958]: I1006 12:06:05.285441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerStarted","Data":"0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe"} Oct 06 12:06:05 crc kubenswrapper[4958]: I1006 12:06:05.566962 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 12:06:05 crc kubenswrapper[4958]: I1006 12:06:05.574275 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:05 crc kubenswrapper[4958]: I1006 12:06:05.574316 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:05 crc kubenswrapper[4958]: I1006 12:06:05.599574 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 12:06:06 crc kubenswrapper[4958]: I1006 12:06:06.325867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerStarted","Data":"109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53"} Oct 06 12:06:06 crc kubenswrapper[4958]: I1006 12:06:06.326484 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerStarted","Data":"a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85"} Oct 06 12:06:06 crc kubenswrapper[4958]: I1006 12:06:06.353540 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 12:06:06 crc kubenswrapper[4958]: I1006 12:06:06.656391 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:06 crc kubenswrapper[4958]: I1006 12:06:06.656416 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:08 crc kubenswrapper[4958]: I1006 12:06:08.345424 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerStarted","Data":"bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63"} Oct 06 12:06:08 crc kubenswrapper[4958]: I1006 12:06:08.345848 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:06:08 crc kubenswrapper[4958]: I1006 12:06:08.389650 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7971305960000001 podStartE2EDuration="5.389634061s" podCreationTimestamp="2025-10-06 12:06:03 +0000 UTC" firstStartedPulling="2025-10-06 12:06:04.176977869 +0000 UTC m=+1118.063003177" lastFinishedPulling="2025-10-06 12:06:07.769481324 +0000 UTC m=+1121.655506642" observedRunningTime="2025-10-06 12:06:08.384162166 +0000 UTC m=+1122.270187474" watchObservedRunningTime="2025-10-06 12:06:08.389634061 +0000 UTC m=+1122.275659369" Oct 06 12:06:09 crc kubenswrapper[4958]: I1006 12:06:09.614557 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 12:06:10 crc kubenswrapper[4958]: I1006 12:06:10.447960 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:10 crc kubenswrapper[4958]: I1006 12:06:10.449434 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:10 crc kubenswrapper[4958]: I1006 12:06:10.457283 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:06:11 crc kubenswrapper[4958]: I1006 12:06:11.407105 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.388781 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.461216 4958 generic.go:334] "Generic (PLEG): container finished" podID="6e659360-bc60-4dc2-90b0-af9567ac637d" containerID="5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4" exitCode=137 Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.462807 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.463621 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e659360-bc60-4dc2-90b0-af9567ac637d","Type":"ContainerDied","Data":"5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4"} Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.463733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e659360-bc60-4dc2-90b0-af9567ac637d","Type":"ContainerDied","Data":"eb78bbc392b24e830c7c703860d533080b318c99e1cf7435642d2222c5874336"} Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.463757 4958 scope.go:117] "RemoveContainer" containerID="5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.502480 4958 scope.go:117] "RemoveContainer" containerID="5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4" Oct 06 12:06:13 crc kubenswrapper[4958]: E1006 12:06:13.502932 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4\": container with ID starting with 5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4 not found: ID does not exist" containerID="5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 
12:06:13.502964 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4"} err="failed to get container status \"5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4\": rpc error: code = NotFound desc = could not find container \"5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4\": container with ID starting with 5fe4967cb221c00ba18b7bc56dc9ccd3d862f9e219e3e4cdd9922ccbb66abbe4 not found: ID does not exist" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.536019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-combined-ca-bundle\") pod \"6e659360-bc60-4dc2-90b0-af9567ac637d\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.536116 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-config-data\") pod \"6e659360-bc60-4dc2-90b0-af9567ac637d\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.536445 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d6zw\" (UniqueName: \"kubernetes.io/projected/6e659360-bc60-4dc2-90b0-af9567ac637d-kube-api-access-2d6zw\") pod \"6e659360-bc60-4dc2-90b0-af9567ac637d\" (UID: \"6e659360-bc60-4dc2-90b0-af9567ac637d\") " Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.541753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e659360-bc60-4dc2-90b0-af9567ac637d-kube-api-access-2d6zw" (OuterVolumeSpecName: "kube-api-access-2d6zw") pod "6e659360-bc60-4dc2-90b0-af9567ac637d" (UID: "6e659360-bc60-4dc2-90b0-af9567ac637d"). 
InnerVolumeSpecName "kube-api-access-2d6zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.566423 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-config-data" (OuterVolumeSpecName: "config-data") pod "6e659360-bc60-4dc2-90b0-af9567ac637d" (UID: "6e659360-bc60-4dc2-90b0-af9567ac637d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.568762 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e659360-bc60-4dc2-90b0-af9567ac637d" (UID: "6e659360-bc60-4dc2-90b0-af9567ac637d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.638668 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d6zw\" (UniqueName: \"kubernetes.io/projected/6e659360-bc60-4dc2-90b0-af9567ac637d-kube-api-access-2d6zw\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.638701 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.638712 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e659360-bc60-4dc2-90b0-af9567ac637d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.830646 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 
12:06:13.840451 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.859091 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:13 crc kubenswrapper[4958]: E1006 12:06:13.859628 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e659360-bc60-4dc2-90b0-af9567ac637d" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.859650 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e659360-bc60-4dc2-90b0-af9567ac637d" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.859922 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e659360-bc60-4dc2-90b0-af9567ac637d" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.860718 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.863331 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.864318 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.864537 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.894271 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.942764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.943090 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.943215 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: 
I1006 12:06:13.943300 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8wj\" (UniqueName: \"kubernetes.io/projected/07232aba-c139-41f7-b153-ab542bbfa39a-kube-api-access-gj8wj\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:13 crc kubenswrapper[4958]: I1006 12:06:13.943393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.044574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.044633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8wj\" (UniqueName: \"kubernetes.io/projected/07232aba-c139-41f7-b153-ab542bbfa39a-kube-api-access-gj8wj\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.044670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc 
kubenswrapper[4958]: I1006 12:06:14.044749 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.044844 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.051946 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.053441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.054046 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.064276 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07232aba-c139-41f7-b153-ab542bbfa39a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.068138 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8wj\" (UniqueName: \"kubernetes.io/projected/07232aba-c139-41f7-b153-ab542bbfa39a-kube-api-access-gj8wj\") pod \"nova-cell1-novncproxy-0\" (UID: \"07232aba-c139-41f7-b153-ab542bbfa39a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.194615 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.699847 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:14 crc kubenswrapper[4958]: W1006 12:06:14.706366 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07232aba_c139_41f7_b153_ab542bbfa39a.slice/crio-1ef7e3e96372ca699b5271757d2a1b459ec9b76c21be3e156828df8bfffb80fd WatchSource:0}: Error finding container 1ef7e3e96372ca699b5271757d2a1b459ec9b76c21be3e156828df8bfffb80fd: Status 404 returned error can't find the container with id 1ef7e3e96372ca699b5271757d2a1b459ec9b76c21be3e156828df8bfffb80fd Oct 06 12:06:14 crc kubenswrapper[4958]: I1006 12:06:14.933628 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e659360-bc60-4dc2-90b0-af9567ac637d" path="/var/lib/kubelet/pods/6e659360-bc60-4dc2-90b0-af9567ac637d/volumes" Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.495667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"07232aba-c139-41f7-b153-ab542bbfa39a","Type":"ContainerStarted","Data":"b25a63746730f2bb70517e088dd08b2073fee46c3784cf446edab499b4e45433"} Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.495724 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"07232aba-c139-41f7-b153-ab542bbfa39a","Type":"ContainerStarted","Data":"1ef7e3e96372ca699b5271757d2a1b459ec9b76c21be3e156828df8bfffb80fd"} Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.529928 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5299021809999998 podStartE2EDuration="2.529902181s" podCreationTimestamp="2025-10-06 12:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:15.517645212 +0000 UTC m=+1129.403670550" watchObservedRunningTime="2025-10-06 12:06:15.529902181 +0000 UTC m=+1129.415927489" Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.579038 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.579796 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.582914 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:15 crc kubenswrapper[4958]: I1006 12:06:15.591658 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.502987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.507166 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.697138 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qz6ck"] Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.748083 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.777425 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qz6ck"] Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.826586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.826762 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.826836 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.826890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-config\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.826921 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-569lb\" (UniqueName: \"kubernetes.io/projected/bae20f40-c739-424c-bfe7-14968520377e-kube-api-access-569lb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.826989 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.928979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569lb\" (UniqueName: \"kubernetes.io/projected/bae20f40-c739-424c-bfe7-14968520377e-kube-api-access-569lb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.929065 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.929195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.929280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.929469 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.929534 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-config\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.930264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.930350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.930435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.931100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-config\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.931812 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:16 crc kubenswrapper[4958]: I1006 12:06:16.953782 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569lb\" (UniqueName: \"kubernetes.io/projected/bae20f40-c739-424c-bfe7-14968520377e-kube-api-access-569lb\") pod \"dnsmasq-dns-59cf4bdb65-qz6ck\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:17 crc kubenswrapper[4958]: I1006 12:06:17.088845 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:17 crc kubenswrapper[4958]: I1006 12:06:17.598641 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qz6ck"] Oct 06 12:06:17 crc kubenswrapper[4958]: W1006 12:06:17.602110 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbae20f40_c739_424c_bfe7_14968520377e.slice/crio-7b2f8e447ba92db42cf11fba39144d0e05a06de1cbba0408510bfe6e1c58a08a WatchSource:0}: Error finding container 7b2f8e447ba92db42cf11fba39144d0e05a06de1cbba0408510bfe6e1c58a08a: Status 404 returned error can't find the container with id 7b2f8e447ba92db42cf11fba39144d0e05a06de1cbba0408510bfe6e1c58a08a Oct 06 12:06:18 crc kubenswrapper[4958]: I1006 12:06:18.527099 4958 generic.go:334] "Generic (PLEG): container finished" podID="bae20f40-c739-424c-bfe7-14968520377e" containerID="30874e887853948e61c97e70c09d72a7ccf0b51f74d5603c645e9d7670def6be" exitCode=0 Oct 06 12:06:18 crc kubenswrapper[4958]: I1006 12:06:18.527186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" event={"ID":"bae20f40-c739-424c-bfe7-14968520377e","Type":"ContainerDied","Data":"30874e887853948e61c97e70c09d72a7ccf0b51f74d5603c645e9d7670def6be"} Oct 06 12:06:18 crc kubenswrapper[4958]: I1006 12:06:18.527706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" event={"ID":"bae20f40-c739-424c-bfe7-14968520377e","Type":"ContainerStarted","Data":"7b2f8e447ba92db42cf11fba39144d0e05a06de1cbba0408510bfe6e1c58a08a"} Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.116128 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.116730 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-central-agent" containerID="cri-o://0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.116801 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="sg-core" containerID="cri-o://109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.116854 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="proxy-httpd" containerID="cri-o://bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.116821 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-notification-agent" containerID="cri-o://a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.194965 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.283467 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.461898 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.554761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" event={"ID":"bae20f40-c739-424c-bfe7-14968520377e","Type":"ContainerStarted","Data":"a89a96b1b69b14141f600174903d13b90a573a98c92971fefbdb276f665050b4"} Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.554871 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.558108 4958 generic.go:334] "Generic (PLEG): container finished" podID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerID="bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63" exitCode=0 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.558130 4958 generic.go:334] "Generic (PLEG): container finished" podID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerID="109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53" exitCode=2 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.558314 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-log" containerID="cri-o://b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.558512 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerDied","Data":"bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63"} Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.558537 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerDied","Data":"109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53"} Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.558613 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-api" containerID="cri-o://2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4958]: I1006 12:06:19.579050 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" podStartSLOduration=3.579027965 podStartE2EDuration="3.579027965s" podCreationTimestamp="2025-10-06 12:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:19.570939111 +0000 UTC m=+1133.456964429" watchObservedRunningTime="2025-10-06 12:06:19.579027965 +0000 UTC m=+1133.465053273" Oct 06 12:06:20 crc kubenswrapper[4958]: I1006 12:06:20.576298 4958 generic.go:334] "Generic (PLEG): container finished" podID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerID="0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe" exitCode=0 Oct 06 12:06:20 crc kubenswrapper[4958]: I1006 12:06:20.576418 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerDied","Data":"0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe"} Oct 06 12:06:20 crc kubenswrapper[4958]: I1006 12:06:20.580606 4958 generic.go:334] "Generic (PLEG): container finished" podID="78492be8-7a21-4a4b-902e-947656515f65" containerID="b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626" exitCode=143 Oct 06 12:06:20 crc kubenswrapper[4958]: I1006 12:06:20.581856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78492be8-7a21-4a4b-902e-947656515f65","Type":"ContainerDied","Data":"b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626"} Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.232940 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.327575 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.378002 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-config-data\") pod \"78492be8-7a21-4a4b-902e-947656515f65\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.378092 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-combined-ca-bundle\") pod \"78492be8-7a21-4a4b-902e-947656515f65\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.378178 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78492be8-7a21-4a4b-902e-947656515f65-logs\") pod \"78492be8-7a21-4a4b-902e-947656515f65\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.378315 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctgg8\" (UniqueName: \"kubernetes.io/projected/78492be8-7a21-4a4b-902e-947656515f65-kube-api-access-ctgg8\") pod \"78492be8-7a21-4a4b-902e-947656515f65\" (UID: \"78492be8-7a21-4a4b-902e-947656515f65\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.379134 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78492be8-7a21-4a4b-902e-947656515f65-logs" (OuterVolumeSpecName: "logs") pod "78492be8-7a21-4a4b-902e-947656515f65" (UID: "78492be8-7a21-4a4b-902e-947656515f65"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.383446 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78492be8-7a21-4a4b-902e-947656515f65-kube-api-access-ctgg8" (OuterVolumeSpecName: "kube-api-access-ctgg8") pod "78492be8-7a21-4a4b-902e-947656515f65" (UID: "78492be8-7a21-4a4b-902e-947656515f65"). InnerVolumeSpecName "kube-api-access-ctgg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.404822 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78492be8-7a21-4a4b-902e-947656515f65" (UID: "78492be8-7a21-4a4b-902e-947656515f65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.417563 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-config-data" (OuterVolumeSpecName: "config-data") pod "78492be8-7a21-4a4b-902e-947656515f65" (UID: "78492be8-7a21-4a4b-902e-947656515f65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479499 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8pg8\" (UniqueName: \"kubernetes.io/projected/b717fb97-37ee-4f1c-8d12-6a321800b724-kube-api-access-g8pg8\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-sg-core-conf-yaml\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479700 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-ceilometer-tls-certs\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479734 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-scripts\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479784 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-config-data\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-run-httpd\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479897 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-combined-ca-bundle\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480227 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.479986 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-log-httpd\") pod \"b717fb97-37ee-4f1c-8d12-6a321800b724\" (UID: \"b717fb97-37ee-4f1c-8d12-6a321800b724\") " Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480422 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480856 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480884 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480898 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78492be8-7a21-4a4b-902e-947656515f65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480911 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78492be8-7a21-4a4b-902e-947656515f65-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480924 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b717fb97-37ee-4f1c-8d12-6a321800b724-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.480938 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctgg8\" (UniqueName: \"kubernetes.io/projected/78492be8-7a21-4a4b-902e-947656515f65-kube-api-access-ctgg8\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.483284 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-scripts" (OuterVolumeSpecName: "scripts") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.483297 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b717fb97-37ee-4f1c-8d12-6a321800b724-kube-api-access-g8pg8" (OuterVolumeSpecName: "kube-api-access-g8pg8") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "kube-api-access-g8pg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.523100 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.541820 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.582530 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8pg8\" (UniqueName: \"kubernetes.io/projected/b717fb97-37ee-4f1c-8d12-6a321800b724-kube-api-access-g8pg8\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.582563 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.582578 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.582591 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.583577 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.599719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-config-data" (OuterVolumeSpecName: "config-data") pod "b717fb97-37ee-4f1c-8d12-6a321800b724" (UID: "b717fb97-37ee-4f1c-8d12-6a321800b724"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.611426 4958 generic.go:334] "Generic (PLEG): container finished" podID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerID="a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85" exitCode=0 Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.611492 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerDied","Data":"a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85"} Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.611521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b717fb97-37ee-4f1c-8d12-6a321800b724","Type":"ContainerDied","Data":"8e446a6516ac9b4f0289af7f39be335d884475f0ecce22a89e96f21aa57c3d41"} Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.611537 4958 scope.go:117] "RemoveContainer" containerID="bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.611696 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.623043 4958 generic.go:334] "Generic (PLEG): container finished" podID="78492be8-7a21-4a4b-902e-947656515f65" containerID="2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506" exitCode=0 Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.623118 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78492be8-7a21-4a4b-902e-947656515f65","Type":"ContainerDied","Data":"2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506"} Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.623196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78492be8-7a21-4a4b-902e-947656515f65","Type":"ContainerDied","Data":"980b3a5e5f6c99f071bf9499e728dd922f8cba8fccb21c00635469626c70d361"} Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.623271 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.645860 4958 scope.go:117] "RemoveContainer" containerID="109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.666552 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.677350 4958 scope.go:117] "RemoveContainer" containerID="a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.684227 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.684265 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b717fb97-37ee-4f1c-8d12-6a321800b724-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.686771 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.698241 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.705977 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.713495 4958 scope.go:117] "RemoveContainer" containerID="0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.743256 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.744133 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-notification-agent" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.744179 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-notification-agent" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.744220 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="proxy-httpd" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.744230 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="proxy-httpd" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.744246 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-api" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.744256 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-api" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.744280 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="sg-core" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.744289 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="sg-core" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.744317 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-central-agent" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.744326 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-central-agent" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.744355 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-log" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.744364 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-log" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.745632 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="sg-core" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.745673 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-central-agent" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.745877 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-log" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.745966 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="proxy-httpd" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.745997 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" containerName="ceilometer-notification-agent" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.746052 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="78492be8-7a21-4a4b-902e-947656515f65" containerName="nova-api-api" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.752999 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.755705 4958 scope.go:117] "RemoveContainer" containerID="bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.756382 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.756514 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.756963 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.757004 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63\": container with ID starting with bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63 not found: ID does not exist" containerID="bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.757209 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63"} err="failed to get container status \"bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63\": rpc error: code = NotFound desc = could not find container \"bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63\": container with ID starting with bf21e16efe1f01b182013cf79fc7767e428d095a337f2a467d58dfc84a5c4a63 not found: ID does not exist" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.757300 4958 scope.go:117] "RemoveContainer" containerID="109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53" Oct 06 12:06:23 crc 
kubenswrapper[4958]: E1006 12:06:23.757683 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53\": container with ID starting with 109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53 not found: ID does not exist" containerID="109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.757758 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53"} err="failed to get container status \"109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53\": rpc error: code = NotFound desc = could not find container \"109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53\": container with ID starting with 109cc25826ccd5930329b3abe387d929ae17493190c711369e9dc213a8315d53 not found: ID does not exist" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.757787 4958 scope.go:117] "RemoveContainer" containerID="a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.758302 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85\": container with ID starting with a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85 not found: ID does not exist" containerID="a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.758389 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85"} err="failed to get container status 
\"a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85\": rpc error: code = NotFound desc = could not find container \"a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85\": container with ID starting with a82b27cd1ab6ecac9be1cb32cc121e334224154dde85da814ba53f32ccd38b85 not found: ID does not exist" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.758479 4958 scope.go:117] "RemoveContainer" containerID="0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.758829 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe\": container with ID starting with 0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe not found: ID does not exist" containerID="0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.758865 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe"} err="failed to get container status \"0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe\": rpc error: code = NotFound desc = could not find container \"0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe\": container with ID starting with 0402e3edaa6063c8ec54b1356ae5f24b3d687dcf9f9e84c2c091243a0a451afe not found: ID does not exist" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.758907 4958 scope.go:117] "RemoveContainer" containerID="2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.765468 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.801339 4958 scope.go:117] "RemoveContainer" 
containerID="b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.801490 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.803383 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.806590 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.806645 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.807880 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.810755 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.863185 4958 scope.go:117] "RemoveContainer" containerID="2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.866512 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506\": container with ID starting with 2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506 not found: ID does not exist" containerID="2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.866555 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506"} err="failed to get container status 
\"2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506\": rpc error: code = NotFound desc = could not find container \"2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506\": container with ID starting with 2b8a7ab82f9011405de708d95c40916dd79c5fab582a2640ad096f63c0727506 not found: ID does not exist" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.866582 4958 scope.go:117] "RemoveContainer" containerID="b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626" Oct 06 12:06:23 crc kubenswrapper[4958]: E1006 12:06:23.866837 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626\": container with ID starting with b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626 not found: ID does not exist" containerID="b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.866876 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626"} err="failed to get container status \"b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626\": rpc error: code = NotFound desc = could not find container \"b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626\": container with ID starting with b8f39ff80a1fc3259a69e273f73d4e86d94f99fd26810cf2cf2c608a8978a626 not found: ID does not exist" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpt5d\" (UniqueName: \"kubernetes.io/projected/1f8ab826-499a-4820-8964-bef811918491-kube-api-access-zpt5d\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 
12:06:23.887283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-config-data\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8110dd9c-85f9-4427-909e-1bc397a4678c-log-httpd\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpcc\" (UniqueName: \"kubernetes.io/projected/8110dd9c-85f9-4427-909e-1bc397a4678c-kube-api-access-shpcc\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887621 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8110dd9c-85f9-4427-909e-1bc397a4678c-run-httpd\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8ab826-499a-4820-8964-bef811918491-logs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887759 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887799 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887882 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-scripts\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887940 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-config-data\") pod \"ceilometer-0\" (UID: 
\"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.887998 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.888018 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-public-tls-certs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.989651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-scripts\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.989701 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-config-data\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.989752 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.989768 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-public-tls-certs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.989817 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpt5d\" (UniqueName: \"kubernetes.io/projected/1f8ab826-499a-4820-8964-bef811918491-kube-api-access-zpt5d\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.989844 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-config-data\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.991267 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.991363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8110dd9c-85f9-4427-909e-1bc397a4678c-log-httpd\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.991467 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.991861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8110dd9c-85f9-4427-909e-1bc397a4678c-log-httpd\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.992079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpcc\" (UniqueName: \"kubernetes.io/projected/8110dd9c-85f9-4427-909e-1bc397a4678c-kube-api-access-shpcc\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.992621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8110dd9c-85f9-4427-909e-1bc397a4678c-run-httpd\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.992691 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8ab826-499a-4820-8964-bef811918491-logs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.993063 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8110dd9c-85f9-4427-909e-1bc397a4678c-run-httpd\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.993312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.993398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.993721 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8ab826-499a-4820-8964-bef811918491-logs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.994673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-config-data\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.995293 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-scripts\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4958]: I1006 12:06:23.996666 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.005768 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.005833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.006194 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-config-data\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.006343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.006542 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-public-tls-certs\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.007465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8110dd9c-85f9-4427-909e-1bc397a4678c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" 
Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.009932 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpt5d\" (UniqueName: \"kubernetes.io/projected/1f8ab826-499a-4820-8964-bef811918491-kube-api-access-zpt5d\") pod \"nova-api-0\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " pod="openstack/nova-api-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.013713 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpcc\" (UniqueName: \"kubernetes.io/projected/8110dd9c-85f9-4427-909e-1bc397a4678c-kube-api-access-shpcc\") pod \"ceilometer-0\" (UID: \"8110dd9c-85f9-4427-909e-1bc397a4678c\") " pod="openstack/ceilometer-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.084064 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.161266 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.195637 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.212265 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.649894 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.667101 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:24 crc kubenswrapper[4958]: W1006 12:06:24.674852 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8110dd9c_85f9_4427_909e_1bc397a4678c.slice/crio-d629b48dfcd37e662b78d9a044faf4fff19d93a463961e9d71b9899b5535ad5f WatchSource:0}: Error finding container d629b48dfcd37e662b78d9a044faf4fff19d93a463961e9d71b9899b5535ad5f: Status 404 returned error can't find the container with id d629b48dfcd37e662b78d9a044faf4fff19d93a463961e9d71b9899b5535ad5f Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.747355 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:24 crc kubenswrapper[4958]: W1006 12:06:24.757999 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8ab826_499a_4820_8964_bef811918491.slice/crio-c4b48ab0059a31f2d5dffc8c9c436e750d8c91b7fb1828eb8514e51340e4c86b WatchSource:0}: Error finding container c4b48ab0059a31f2d5dffc8c9c436e750d8c91b7fb1828eb8514e51340e4c86b: Status 404 returned error can't find the container with id c4b48ab0059a31f2d5dffc8c9c436e750d8c91b7fb1828eb8514e51340e4c86b Oct 06 12:06:24 crc 
kubenswrapper[4958]: I1006 12:06:24.853199 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xftf8"] Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.861454 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.863513 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xftf8"] Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.864409 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.864619 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.933553 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78492be8-7a21-4a4b-902e-947656515f65" path="/var/lib/kubelet/pods/78492be8-7a21-4a4b-902e-947656515f65/volumes" Oct 06 12:06:24 crc kubenswrapper[4958]: I1006 12:06:24.934269 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b717fb97-37ee-4f1c-8d12-6a321800b724" path="/var/lib/kubelet/pods/b717fb97-37ee-4f1c-8d12-6a321800b724/volumes" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.021387 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.021635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch79z\" (UniqueName: 
\"kubernetes.io/projected/3da134f3-379d-4899-a301-d9091d57f4d4-kube-api-access-ch79z\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.021857 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-config-data\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.021932 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-scripts\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.123574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-config-data\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.123629 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-scripts\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.123709 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.123742 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch79z\" (UniqueName: \"kubernetes.io/projected/3da134f3-379d-4899-a301-d9091d57f4d4-kube-api-access-ch79z\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.130670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-scripts\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.131610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.132344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-config-data\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.143903 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch79z\" (UniqueName: 
\"kubernetes.io/projected/3da134f3-379d-4899-a301-d9091d57f4d4-kube-api-access-ch79z\") pod \"nova-cell1-cell-mapping-xftf8\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.226830 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.645425 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8ab826-499a-4820-8964-bef811918491","Type":"ContainerStarted","Data":"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e"} Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.645729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8ab826-499a-4820-8964-bef811918491","Type":"ContainerStarted","Data":"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708"} Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.645742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8ab826-499a-4820-8964-bef811918491","Type":"ContainerStarted","Data":"c4b48ab0059a31f2d5dffc8c9c436e750d8c91b7fb1828eb8514e51340e4c86b"} Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.648827 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8110dd9c-85f9-4427-909e-1bc397a4678c","Type":"ContainerStarted","Data":"a796a99671b755e05fa2095eb5b4929199221188fac3d8427fe6e917a7b61d0a"} Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.648879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8110dd9c-85f9-4427-909e-1bc397a4678c","Type":"ContainerStarted","Data":"d629b48dfcd37e662b78d9a044faf4fff19d93a463961e9d71b9899b5535ad5f"} Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.662760 4958 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.662740277 podStartE2EDuration="2.662740277s" podCreationTimestamp="2025-10-06 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:25.660414397 +0000 UTC m=+1139.546439705" watchObservedRunningTime="2025-10-06 12:06:25.662740277 +0000 UTC m=+1139.548765585" Oct 06 12:06:25 crc kubenswrapper[4958]: I1006 12:06:25.726300 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xftf8"] Oct 06 12:06:26 crc kubenswrapper[4958]: I1006 12:06:26.668183 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xftf8" event={"ID":"3da134f3-379d-4899-a301-d9091d57f4d4","Type":"ContainerStarted","Data":"8f9158940894e8da96a3f5f5ed001d8f3cbe61333921f60498bd6d6f7d4324df"} Oct 06 12:06:26 crc kubenswrapper[4958]: I1006 12:06:26.668532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xftf8" event={"ID":"3da134f3-379d-4899-a301-d9091d57f4d4","Type":"ContainerStarted","Data":"59dabcef55cc9282a641b7daa990ddd9ce70e2079985b440e5690e074a24d038"} Oct 06 12:06:26 crc kubenswrapper[4958]: I1006 12:06:26.670921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8110dd9c-85f9-4427-909e-1bc397a4678c","Type":"ContainerStarted","Data":"7a0c98c4dad04dda12c0a398b9d38af7f0de5b97a8516a5a062b722487802c75"} Oct 06 12:06:26 crc kubenswrapper[4958]: I1006 12:06:26.687365 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xftf8" podStartSLOduration=2.687349532 podStartE2EDuration="2.687349532s" podCreationTimestamp="2025-10-06 12:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:26.68330162 
+0000 UTC m=+1140.569326928" watchObservedRunningTime="2025-10-06 12:06:26.687349532 +0000 UTC m=+1140.573374840" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.091011 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.163957 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jhwnw"] Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.164232 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerName="dnsmasq-dns" containerID="cri-o://71988c94015d22c0b2dbbc077888c692b3d96c6629f3713ac3772bcdd51e0241" gracePeriod=10 Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.680655 4958 generic.go:334] "Generic (PLEG): container finished" podID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerID="71988c94015d22c0b2dbbc077888c692b3d96c6629f3713ac3772bcdd51e0241" exitCode=0 Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.680777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" event={"ID":"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652","Type":"ContainerDied","Data":"71988c94015d22c0b2dbbc077888c692b3d96c6629f3713ac3772bcdd51e0241"} Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.681000 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" event={"ID":"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652","Type":"ContainerDied","Data":"7f300d2531a071ea8fdb4cb4b3b5979880ef70cb515dc0e592768c769ebc37f5"} Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.681016 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f300d2531a071ea8fdb4cb4b3b5979880ef70cb515dc0e592768c769ebc37f5" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.680787 4958 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.683861 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8110dd9c-85f9-4427-909e-1bc397a4678c","Type":"ContainerStarted","Data":"2b016211689aa723d7128317da44b4cbcff6499e710539adec6bd4d182e04823"} Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.772269 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzjfr\" (UniqueName: \"kubernetes.io/projected/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-kube-api-access-rzjfr\") pod \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.772341 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-config\") pod \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.772393 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-nb\") pod \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.773066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-svc\") pod \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.773253 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-sb\") pod \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.773353 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-swift-storage-0\") pod \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\" (UID: \"56bbbfd2-0526-4ce7-a41d-a18dc9a0a652\") " Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.781327 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-kube-api-access-rzjfr" (OuterVolumeSpecName: "kube-api-access-rzjfr") pod "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" (UID: "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652"). InnerVolumeSpecName "kube-api-access-rzjfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.825061 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" (UID: "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.829671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-config" (OuterVolumeSpecName: "config") pod "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" (UID: "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.833689 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" (UID: "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.844638 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" (UID: "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.865506 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" (UID: "56bbbfd2-0526-4ce7-a41d-a18dc9a0a652"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.876363 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzjfr\" (UniqueName: \"kubernetes.io/projected/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-kube-api-access-rzjfr\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.876614 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.876706 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.876784 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.876866 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:27 crc kubenswrapper[4958]: I1006 12:06:27.876943 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:28 crc kubenswrapper[4958]: I1006 12:06:28.693302 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jhwnw" Oct 06 12:06:28 crc kubenswrapper[4958]: I1006 12:06:28.756503 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jhwnw"] Oct 06 12:06:28 crc kubenswrapper[4958]: I1006 12:06:28.767928 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jhwnw"] Oct 06 12:06:28 crc kubenswrapper[4958]: I1006 12:06:28.934044 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" path="/var/lib/kubelet/pods/56bbbfd2-0526-4ce7-a41d-a18dc9a0a652/volumes" Oct 06 12:06:29 crc kubenswrapper[4958]: I1006 12:06:29.706080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8110dd9c-85f9-4427-909e-1bc397a4678c","Type":"ContainerStarted","Data":"3fa273dbfd97c68f6f6f69afb7ad6f49b56ed024df664d41584c2783a142a7be"} Oct 06 12:06:29 crc kubenswrapper[4958]: I1006 12:06:29.707343 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:06:29 crc kubenswrapper[4958]: I1006 12:06:29.742978 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.626908099 podStartE2EDuration="6.742950178s" podCreationTimestamp="2025-10-06 12:06:23 +0000 UTC" firstStartedPulling="2025-10-06 12:06:24.677985284 +0000 UTC m=+1138.564010592" lastFinishedPulling="2025-10-06 12:06:28.794027363 +0000 UTC m=+1142.680052671" observedRunningTime="2025-10-06 12:06:29.730762571 +0000 UTC m=+1143.616787879" watchObservedRunningTime="2025-10-06 12:06:29.742950178 +0000 UTC m=+1143.628975486" Oct 06 12:06:31 crc kubenswrapper[4958]: I1006 12:06:31.727279 4958 generic.go:334] "Generic (PLEG): container finished" podID="3da134f3-379d-4899-a301-d9091d57f4d4" containerID="8f9158940894e8da96a3f5f5ed001d8f3cbe61333921f60498bd6d6f7d4324df" exitCode=0 Oct 06 12:06:31 crc 
kubenswrapper[4958]: I1006 12:06:31.727411 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xftf8" event={"ID":"3da134f3-379d-4899-a301-d9091d57f4d4","Type":"ContainerDied","Data":"8f9158940894e8da96a3f5f5ed001d8f3cbe61333921f60498bd6d6f7d4324df"} Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.146078 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.283118 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-config-data\") pod \"3da134f3-379d-4899-a301-d9091d57f4d4\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.283260 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-scripts\") pod \"3da134f3-379d-4899-a301-d9091d57f4d4\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.283327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-combined-ca-bundle\") pod \"3da134f3-379d-4899-a301-d9091d57f4d4\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.283415 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch79z\" (UniqueName: \"kubernetes.io/projected/3da134f3-379d-4899-a301-d9091d57f4d4-kube-api-access-ch79z\") pod \"3da134f3-379d-4899-a301-d9091d57f4d4\" (UID: \"3da134f3-379d-4899-a301-d9091d57f4d4\") " Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.294304 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-scripts" (OuterVolumeSpecName: "scripts") pod "3da134f3-379d-4899-a301-d9091d57f4d4" (UID: "3da134f3-379d-4899-a301-d9091d57f4d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.294343 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da134f3-379d-4899-a301-d9091d57f4d4-kube-api-access-ch79z" (OuterVolumeSpecName: "kube-api-access-ch79z") pod "3da134f3-379d-4899-a301-d9091d57f4d4" (UID: "3da134f3-379d-4899-a301-d9091d57f4d4"). InnerVolumeSpecName "kube-api-access-ch79z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.322428 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-config-data" (OuterVolumeSpecName: "config-data") pod "3da134f3-379d-4899-a301-d9091d57f4d4" (UID: "3da134f3-379d-4899-a301-d9091d57f4d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.340520 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da134f3-379d-4899-a301-d9091d57f4d4" (UID: "3da134f3-379d-4899-a301-d9091d57f4d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.384830 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch79z\" (UniqueName: \"kubernetes.io/projected/3da134f3-379d-4899-a301-d9091d57f4d4-kube-api-access-ch79z\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.385592 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.385641 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.385655 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da134f3-379d-4899-a301-d9091d57f4d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.752042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xftf8" event={"ID":"3da134f3-379d-4899-a301-d9091d57f4d4","Type":"ContainerDied","Data":"59dabcef55cc9282a641b7daa990ddd9ce70e2079985b440e5690e074a24d038"} Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.752103 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59dabcef55cc9282a641b7daa990ddd9ce70e2079985b440e5690e074a24d038" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.752110 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xftf8" Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.949748 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.950125 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-log" containerID="cri-o://986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.950330 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-api" containerID="cri-o://5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.975174 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:33 crc kubenswrapper[4958]: I1006 12:06:33.975425 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" containerName="nova-scheduler-scheduler" containerID="cri-o://688cfc82e38fc05dbd4d1059a62732effe2a28b07687f722d3d0460704530bca" gracePeriod=30 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.007684 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.007980 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-log" containerID="cri-o://f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e" gracePeriod=30 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.008083 4958 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-metadata" containerID="cri-o://0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46" gracePeriod=30 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.656180 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.721627 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-public-tls-certs\") pod \"1f8ab826-499a-4820-8964-bef811918491\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.721677 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-internal-tls-certs\") pod \"1f8ab826-499a-4820-8964-bef811918491\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.721748 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpt5d\" (UniqueName: \"kubernetes.io/projected/1f8ab826-499a-4820-8964-bef811918491-kube-api-access-zpt5d\") pod \"1f8ab826-499a-4820-8964-bef811918491\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.721773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-config-data\") pod \"1f8ab826-499a-4820-8964-bef811918491\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.721790 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-combined-ca-bundle\") pod \"1f8ab826-499a-4820-8964-bef811918491\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.721863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8ab826-499a-4820-8964-bef811918491-logs\") pod \"1f8ab826-499a-4820-8964-bef811918491\" (UID: \"1f8ab826-499a-4820-8964-bef811918491\") " Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.722662 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8ab826-499a-4820-8964-bef811918491-logs" (OuterVolumeSpecName: "logs") pod "1f8ab826-499a-4820-8964-bef811918491" (UID: "1f8ab826-499a-4820-8964-bef811918491"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.738674 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8ab826-499a-4820-8964-bef811918491-kube-api-access-zpt5d" (OuterVolumeSpecName: "kube-api-access-zpt5d") pod "1f8ab826-499a-4820-8964-bef811918491" (UID: "1f8ab826-499a-4820-8964-bef811918491"). InnerVolumeSpecName "kube-api-access-zpt5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.766907 4958 generic.go:334] "Generic (PLEG): container finished" podID="96807e80-df6e-4105-a74d-296cf572e80f" containerID="f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e" exitCode=143 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.767399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96807e80-df6e-4105-a74d-296cf572e80f","Type":"ContainerDied","Data":"f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e"} Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.768585 4958 generic.go:334] "Generic (PLEG): container finished" podID="b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" containerID="688cfc82e38fc05dbd4d1059a62732effe2a28b07687f722d3d0460704530bca" exitCode=0 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.768654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5","Type":"ContainerDied","Data":"688cfc82e38fc05dbd4d1059a62732effe2a28b07687f722d3d0460704530bca"} Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.770676 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f8ab826-499a-4820-8964-bef811918491" containerID="5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e" exitCode=0 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.770694 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f8ab826-499a-4820-8964-bef811918491" containerID="986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708" exitCode=143 Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.770709 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8ab826-499a-4820-8964-bef811918491","Type":"ContainerDied","Data":"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e"} Oct 06 12:06:34 crc 
kubenswrapper[4958]: I1006 12:06:34.770723 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8ab826-499a-4820-8964-bef811918491","Type":"ContainerDied","Data":"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708"} Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.770733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8ab826-499a-4820-8964-bef811918491","Type":"ContainerDied","Data":"c4b48ab0059a31f2d5dffc8c9c436e750d8c91b7fb1828eb8514e51340e4c86b"} Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.770749 4958 scope.go:117] "RemoveContainer" containerID="5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.770895 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.780301 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f8ab826-499a-4820-8964-bef811918491" (UID: "1f8ab826-499a-4820-8964-bef811918491"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.781387 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-config-data" (OuterVolumeSpecName: "config-data") pod "1f8ab826-499a-4820-8964-bef811918491" (UID: "1f8ab826-499a-4820-8964-bef811918491"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.796049 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f8ab826-499a-4820-8964-bef811918491" (UID: "1f8ab826-499a-4820-8964-bef811918491"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.812195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f8ab826-499a-4820-8964-bef811918491" (UID: "1f8ab826-499a-4820-8964-bef811918491"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824015 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8ab826-499a-4820-8964-bef811918491-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824070 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824085 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824094 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpt5d\" (UniqueName: \"kubernetes.io/projected/1f8ab826-499a-4820-8964-bef811918491-kube-api-access-zpt5d\") on node \"crc\" 
DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824102 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824110 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8ab826-499a-4820-8964-bef811918491-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.824889 4958 scope.go:117] "RemoveContainer" containerID="986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.852462 4958 scope.go:117] "RemoveContainer" containerID="5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e" Oct 06 12:06:34 crc kubenswrapper[4958]: E1006 12:06:34.852893 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e\": container with ID starting with 5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e not found: ID does not exist" containerID="5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.852921 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e"} err="failed to get container status \"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e\": rpc error: code = NotFound desc = could not find container \"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e\": container with ID starting with 5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e not found: ID does not exist" Oct 06 12:06:34 crc 
kubenswrapper[4958]: I1006 12:06:34.852945 4958 scope.go:117] "RemoveContainer" containerID="986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708" Oct 06 12:06:34 crc kubenswrapper[4958]: E1006 12:06:34.853499 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708\": container with ID starting with 986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708 not found: ID does not exist" containerID="986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.853519 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708"} err="failed to get container status \"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708\": rpc error: code = NotFound desc = could not find container \"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708\": container with ID starting with 986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708 not found: ID does not exist" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.853549 4958 scope.go:117] "RemoveContainer" containerID="5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.854301 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e"} err="failed to get container status \"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e\": rpc error: code = NotFound desc = could not find container \"5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e\": container with ID starting with 5e3d88c9499dd2a1ddcefe240497cd84d427ee163aab8fae08d92ea2a83c3f4e not found: ID does not exist" Oct 06 
12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.854320 4958 scope.go:117] "RemoveContainer" containerID="986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.854530 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708"} err="failed to get container status \"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708\": rpc error: code = NotFound desc = could not find container \"986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708\": container with ID starting with 986bd94f946ef4703e4af80f47070c12aadc9e45b3c10f79f4c6d0324906e708 not found: ID does not exist" Oct 06 12:06:34 crc kubenswrapper[4958]: I1006 12:06:34.862768 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.027113 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-combined-ca-bundle\") pod \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.027312 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdzwx\" (UniqueName: \"kubernetes.io/projected/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-kube-api-access-tdzwx\") pod \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\" (UID: \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.027429 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-config-data\") pod \"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\" (UID: 
\"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5\") " Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.031779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-kube-api-access-tdzwx" (OuterVolumeSpecName: "kube-api-access-tdzwx") pod "b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" (UID: "b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5"). InnerVolumeSpecName "kube-api-access-tdzwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.061266 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-config-data" (OuterVolumeSpecName: "config-data") pod "b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" (UID: "b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.070316 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" (UID: "b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.130774 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.130841 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.130860 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdzwx\" (UniqueName: \"kubernetes.io/projected/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5-kube-api-access-tdzwx\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.203241 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.211855 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.231771 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: E1006 12:06:35.232224 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-log" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232241 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-log" Oct 06 12:06:35 crc kubenswrapper[4958]: E1006 12:06:35.232257 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerName="init" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232264 4958 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerName="init" Oct 06 12:06:35 crc kubenswrapper[4958]: E1006 12:06:35.232276 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" containerName="nova-scheduler-scheduler" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232283 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" containerName="nova-scheduler-scheduler" Oct 06 12:06:35 crc kubenswrapper[4958]: E1006 12:06:35.232299 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerName="dnsmasq-dns" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232307 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerName="dnsmasq-dns" Oct 06 12:06:35 crc kubenswrapper[4958]: E1006 12:06:35.232318 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da134f3-379d-4899-a301-d9091d57f4d4" containerName="nova-manage" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232324 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da134f3-379d-4899-a301-d9091d57f4d4" containerName="nova-manage" Oct 06 12:06:35 crc kubenswrapper[4958]: E1006 12:06:35.232349 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-api" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232357 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-api" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232543 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="56bbbfd2-0526-4ce7-a41d-a18dc9a0a652" containerName="dnsmasq-dns" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232554 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-api" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232595 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8ab826-499a-4820-8964-bef811918491" containerName="nova-api-log" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232610 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" containerName="nova-scheduler-scheduler" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.232623 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da134f3-379d-4899-a301-d9091d57f4d4" containerName="nova-manage" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.233784 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.236414 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.239094 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.243703 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.280283 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.436222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.436288 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.436321 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-public-tls-certs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.436362 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-config-data\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.436397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b96k\" (UniqueName: \"kubernetes.io/projected/18ae50d9-6e14-4379-b6e2-6a1845859f0c-kube-api-access-9b96k\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.436538 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae50d9-6e14-4379-b6e2-6a1845859f0c-logs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.537880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b96k\" (UniqueName: 
\"kubernetes.io/projected/18ae50d9-6e14-4379-b6e2-6a1845859f0c-kube-api-access-9b96k\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.538220 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae50d9-6e14-4379-b6e2-6a1845859f0c-logs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.538295 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.538349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.538383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-public-tls-certs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.538439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-config-data\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.538977 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae50d9-6e14-4379-b6e2-6a1845859f0c-logs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.543934 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.546326 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-config-data\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.546534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.548274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae50d9-6e14-4379-b6e2-6a1845859f0c-public-tls-certs\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.557936 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b96k\" (UniqueName: \"kubernetes.io/projected/18ae50d9-6e14-4379-b6e2-6a1845859f0c-kube-api-access-9b96k\") pod \"nova-api-0\" (UID: \"18ae50d9-6e14-4379-b6e2-6a1845859f0c\") " pod="openstack/nova-api-0" 
Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.574360 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.787117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5","Type":"ContainerDied","Data":"89833430fba07e3d60c6687319fe1dac04127e029505b51414790d33a7781233"} Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.787263 4958 scope.go:117] "RemoveContainer" containerID="688cfc82e38fc05dbd4d1059a62732effe2a28b07687f722d3d0460704530bca" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.787658 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.852134 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.866335 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.881697 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.883674 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.889095 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 12:06:35 crc kubenswrapper[4958]: I1006 12:06:35.896794 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.051498 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b61e54d-6905-4ce9-b034-987af62ab20a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.052191 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmccm\" (UniqueName: \"kubernetes.io/projected/1b61e54d-6905-4ce9-b034-987af62ab20a-kube-api-access-lmccm\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.052255 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b61e54d-6905-4ce9-b034-987af62ab20a-config-data\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.097376 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.154315 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b61e54d-6905-4ce9-b034-987af62ab20a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.154384 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmccm\" (UniqueName: \"kubernetes.io/projected/1b61e54d-6905-4ce9-b034-987af62ab20a-kube-api-access-lmccm\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.154411 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b61e54d-6905-4ce9-b034-987af62ab20a-config-data\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.160323 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b61e54d-6905-4ce9-b034-987af62ab20a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.160470 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b61e54d-6905-4ce9-b034-987af62ab20a-config-data\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.171039 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmccm\" (UniqueName: \"kubernetes.io/projected/1b61e54d-6905-4ce9-b034-987af62ab20a-kube-api-access-lmccm\") pod \"nova-scheduler-0\" (UID: \"1b61e54d-6905-4ce9-b034-987af62ab20a\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.201180 4958 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.645464 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:36 crc kubenswrapper[4958]: W1006 12:06:36.653223 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b61e54d_6905_4ce9_b034_987af62ab20a.slice/crio-79017fdc23b709116e30660fef98c51009370fb34f6745d3704fd9c019fbadbf WatchSource:0}: Error finding container 79017fdc23b709116e30660fef98c51009370fb34f6745d3704fd9c019fbadbf: Status 404 returned error can't find the container with id 79017fdc23b709116e30660fef98c51009370fb34f6745d3704fd9c019fbadbf Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.801194 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18ae50d9-6e14-4379-b6e2-6a1845859f0c","Type":"ContainerStarted","Data":"68f18460b6b901a1b0c307672153d0d7cbf39698f9c1959a26ad8e34796ee534"} Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.802385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18ae50d9-6e14-4379-b6e2-6a1845859f0c","Type":"ContainerStarted","Data":"96f70d516be52d3a7b4f6f3f427a9152f9d5f073a7485721c2c378099b95bdd4"} Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.802458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18ae50d9-6e14-4379-b6e2-6a1845859f0c","Type":"ContainerStarted","Data":"b7495911d8a1a8c92bff47703267d991aa3e7ff52df6742f800575b05592352b"} Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.804385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b61e54d-6905-4ce9-b034-987af62ab20a","Type":"ContainerStarted","Data":"79017fdc23b709116e30660fef98c51009370fb34f6745d3704fd9c019fbadbf"} Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 
12:06:36.825709 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.825682634 podStartE2EDuration="1.825682634s" podCreationTimestamp="2025-10-06 12:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:36.82286544 +0000 UTC m=+1150.708890758" watchObservedRunningTime="2025-10-06 12:06:36.825682634 +0000 UTC m=+1150.711707942" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.936288 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8ab826-499a-4820-8964-bef811918491" path="/var/lib/kubelet/pods/1f8ab826-499a-4820-8964-bef811918491/volumes" Oct 06 12:06:36 crc kubenswrapper[4958]: I1006 12:06:36.937204 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5" path="/var/lib/kubelet/pods/b9b7229f-dbe0-4ff1-83a1-e7adf979a1c5/volumes" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.149949 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:59202->10.217.0.194:8775: read: connection reset by peer" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.150604 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:59204->10.217.0.194:8775: read: connection reset by peer" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.704846 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.791913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fxrk\" (UniqueName: \"kubernetes.io/projected/96807e80-df6e-4105-a74d-296cf572e80f-kube-api-access-9fxrk\") pod \"96807e80-df6e-4105-a74d-296cf572e80f\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.791956 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-config-data\") pod \"96807e80-df6e-4105-a74d-296cf572e80f\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.791974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-combined-ca-bundle\") pod \"96807e80-df6e-4105-a74d-296cf572e80f\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.792011 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96807e80-df6e-4105-a74d-296cf572e80f-logs\") pod \"96807e80-df6e-4105-a74d-296cf572e80f\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.792067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-nova-metadata-tls-certs\") pod \"96807e80-df6e-4105-a74d-296cf572e80f\" (UID: \"96807e80-df6e-4105-a74d-296cf572e80f\") " Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.793360 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/96807e80-df6e-4105-a74d-296cf572e80f-logs" (OuterVolumeSpecName: "logs") pod "96807e80-df6e-4105-a74d-296cf572e80f" (UID: "96807e80-df6e-4105-a74d-296cf572e80f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.801522 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96807e80-df6e-4105-a74d-296cf572e80f-kube-api-access-9fxrk" (OuterVolumeSpecName: "kube-api-access-9fxrk") pod "96807e80-df6e-4105-a74d-296cf572e80f" (UID: "96807e80-df6e-4105-a74d-296cf572e80f"). InnerVolumeSpecName "kube-api-access-9fxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.818566 4958 generic.go:334] "Generic (PLEG): container finished" podID="96807e80-df6e-4105-a74d-296cf572e80f" containerID="0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46" exitCode=0 Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.818644 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96807e80-df6e-4105-a74d-296cf572e80f","Type":"ContainerDied","Data":"0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46"} Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.818683 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96807e80-df6e-4105-a74d-296cf572e80f","Type":"ContainerDied","Data":"8003a94808936843e6f4ecfb2f75e90d53af0eb29be1a397de1df3c72b807356"} Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.818707 4958 scope.go:117] "RemoveContainer" containerID="0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.818867 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.840561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b61e54d-6905-4ce9-b034-987af62ab20a","Type":"ContainerStarted","Data":"e1ddf8abd549282d708198488f83c05801c3369ddeabec86e4c3d61fa8a4e25d"} Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.845124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-config-data" (OuterVolumeSpecName: "config-data") pod "96807e80-df6e-4105-a74d-296cf572e80f" (UID: "96807e80-df6e-4105-a74d-296cf572e80f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.856016 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96807e80-df6e-4105-a74d-296cf572e80f" (UID: "96807e80-df6e-4105-a74d-296cf572e80f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.865570 4958 scope.go:117] "RemoveContainer" containerID="f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.868471 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8684406669999998 podStartE2EDuration="2.868440667s" podCreationTimestamp="2025-10-06 12:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:37.858375543 +0000 UTC m=+1151.744400861" watchObservedRunningTime="2025-10-06 12:06:37.868440667 +0000 UTC m=+1151.754465975" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.871405 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "96807e80-df6e-4105-a74d-296cf572e80f" (UID: "96807e80-df6e-4105-a74d-296cf572e80f"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.882115 4958 scope.go:117] "RemoveContainer" containerID="0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46" Oct 06 12:06:37 crc kubenswrapper[4958]: E1006 12:06:37.882559 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46\": container with ID starting with 0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46 not found: ID does not exist" containerID="0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.882596 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46"} err="failed to get container status \"0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46\": rpc error: code = NotFound desc = could not find container \"0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46\": container with ID starting with 0f8132c59efef53c444cef144397d153172122024dffdc5f94095ef023317a46 not found: ID does not exist" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.882619 4958 scope.go:117] "RemoveContainer" containerID="f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e" Oct 06 12:06:37 crc kubenswrapper[4958]: E1006 12:06:37.882872 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e\": container with ID starting with f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e not found: ID does not exist" containerID="f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.882905 
4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e"} err="failed to get container status \"f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e\": rpc error: code = NotFound desc = could not find container \"f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e\": container with ID starting with f05fa31c47201ef7924773b1b6352543125b33d31f360741f2dc2069e991936e not found: ID does not exist" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.901858 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fxrk\" (UniqueName: \"kubernetes.io/projected/96807e80-df6e-4105-a74d-296cf572e80f-kube-api-access-9fxrk\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.901906 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.901923 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.901948 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96807e80-df6e-4105-a74d-296cf572e80f-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4958]: I1006 12:06:37.901962 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96807e80-df6e-4105-a74d-296cf572e80f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.190300 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.201869 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.212548 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:38 crc kubenswrapper[4958]: E1006 12:06:38.212924 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-metadata" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.212942 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-metadata" Oct 06 12:06:38 crc kubenswrapper[4958]: E1006 12:06:38.212956 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-log" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.212963 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-log" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.213139 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-log" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.213212 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="96807e80-df6e-4105-a74d-296cf572e80f" containerName="nova-metadata-metadata" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.214258 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.216744 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.216972 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.223767 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.311579 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw59d\" (UniqueName: \"kubernetes.io/projected/22f238b7-8e7b-408c-81cd-9635a10e7d3d-kube-api-access-nw59d\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.311663 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22f238b7-8e7b-408c-81cd-9635a10e7d3d-logs\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.311736 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.311778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.311825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-config-data\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.413941 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.413988 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.414023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-config-data\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.414170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw59d\" (UniqueName: \"kubernetes.io/projected/22f238b7-8e7b-408c-81cd-9635a10e7d3d-kube-api-access-nw59d\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 
12:06:38.414203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22f238b7-8e7b-408c-81cd-9635a10e7d3d-logs\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.414654 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22f238b7-8e7b-408c-81cd-9635a10e7d3d-logs\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.419680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.420049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.436187 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22f238b7-8e7b-408c-81cd-9635a10e7d3d-config-data\") pod \"nova-metadata-0\" (UID: \"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.444487 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw59d\" (UniqueName: \"kubernetes.io/projected/22f238b7-8e7b-408c-81cd-9635a10e7d3d-kube-api-access-nw59d\") pod \"nova-metadata-0\" (UID: 
\"22f238b7-8e7b-408c-81cd-9635a10e7d3d\") " pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.532235 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4958]: I1006 12:06:38.923276 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96807e80-df6e-4105-a74d-296cf572e80f" path="/var/lib/kubelet/pods/96807e80-df6e-4105-a74d-296cf572e80f/volumes" Oct 06 12:06:39 crc kubenswrapper[4958]: W1006 12:06:39.052977 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22f238b7_8e7b_408c_81cd_9635a10e7d3d.slice/crio-e7c78e70c8147387e2d5b8530eecfa657a6abbf4e48e0026b44ca5c53f2c98de WatchSource:0}: Error finding container e7c78e70c8147387e2d5b8530eecfa657a6abbf4e48e0026b44ca5c53f2c98de: Status 404 returned error can't find the container with id e7c78e70c8147387e2d5b8530eecfa657a6abbf4e48e0026b44ca5c53f2c98de Oct 06 12:06:39 crc kubenswrapper[4958]: I1006 12:06:39.059554 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:39 crc kubenswrapper[4958]: I1006 12:06:39.862287 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22f238b7-8e7b-408c-81cd-9635a10e7d3d","Type":"ContainerStarted","Data":"0717dd7fcc23402631be4d90d1665468e844cc2657417bb1b911d7d849d37675"} Oct 06 12:06:39 crc kubenswrapper[4958]: I1006 12:06:39.862574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22f238b7-8e7b-408c-81cd-9635a10e7d3d","Type":"ContainerStarted","Data":"b583a1bdb314b164a6f3026f204dbba10a3d85fc987d417dc151b2b7dcc96c77"} Oct 06 12:06:39 crc kubenswrapper[4958]: I1006 12:06:39.862588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"22f238b7-8e7b-408c-81cd-9635a10e7d3d","Type":"ContainerStarted","Data":"e7c78e70c8147387e2d5b8530eecfa657a6abbf4e48e0026b44ca5c53f2c98de"} Oct 06 12:06:39 crc kubenswrapper[4958]: I1006 12:06:39.882035 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.882019882 podStartE2EDuration="1.882019882s" podCreationTimestamp="2025-10-06 12:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:39.880699372 +0000 UTC m=+1153.766724710" watchObservedRunningTime="2025-10-06 12:06:39.882019882 +0000 UTC m=+1153.768045190" Oct 06 12:06:41 crc kubenswrapper[4958]: I1006 12:06:41.202069 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 12:06:43 crc kubenswrapper[4958]: I1006 12:06:43.532987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:06:43 crc kubenswrapper[4958]: I1006 12:06:43.533438 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:06:45 crc kubenswrapper[4958]: I1006 12:06:45.575090 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:45 crc kubenswrapper[4958]: I1006 12:06:45.575552 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:46 crc kubenswrapper[4958]: I1006 12:06:46.201653 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 12:06:46 crc kubenswrapper[4958]: I1006 12:06:46.233416 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 12:06:46 crc kubenswrapper[4958]: I1006 12:06:46.587401 4958 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="18ae50d9-6e14-4379-b6e2-6a1845859f0c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:46 crc kubenswrapper[4958]: I1006 12:06:46.587425 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18ae50d9-6e14-4379-b6e2-6a1845859f0c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:46 crc kubenswrapper[4958]: I1006 12:06:46.995933 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 12:06:48 crc kubenswrapper[4958]: I1006 12:06:48.533175 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:06:48 crc kubenswrapper[4958]: I1006 12:06:48.533612 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:06:49 crc kubenswrapper[4958]: I1006 12:06:49.547321 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="22f238b7-8e7b-408c-81cd-9635a10e7d3d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:49 crc kubenswrapper[4958]: I1006 12:06:49.548328 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="22f238b7-8e7b-408c-81cd-9635a10e7d3d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:54 crc kubenswrapper[4958]: I1006 12:06:54.098007 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Oct 06 12:06:55 crc kubenswrapper[4958]: I1006 12:06:55.584004 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:55 crc kubenswrapper[4958]: I1006 12:06:55.584874 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:55 crc kubenswrapper[4958]: I1006 12:06:55.587336 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:55 crc kubenswrapper[4958]: I1006 12:06:55.593616 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:56 crc kubenswrapper[4958]: I1006 12:06:56.059393 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:56 crc kubenswrapper[4958]: I1006 12:06:56.066369 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:58 crc kubenswrapper[4958]: I1006 12:06:58.542037 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:58 crc kubenswrapper[4958]: I1006 12:06:58.543381 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:58 crc kubenswrapper[4958]: I1006 12:06:58.550984 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:06:58 crc kubenswrapper[4958]: I1006 12:06:58.551333 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:07:06 crc kubenswrapper[4958]: I1006 12:07:06.303343 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:07 crc kubenswrapper[4958]: I1006 12:07:07.364467 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:10 crc kubenswrapper[4958]: I1006 12:07:10.463593 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="rabbitmq" containerID="cri-o://327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e" gracePeriod=604796 Oct 06 12:07:11 crc kubenswrapper[4958]: I1006 12:07:11.540614 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="rabbitmq" containerID="cri-o://bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684" gracePeriod=604796 Oct 06 12:07:16 crc kubenswrapper[4958]: I1006 12:07:16.284905 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Oct 06 12:07:16 crc kubenswrapper[4958]: I1006 12:07:16.609337 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.125871 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.227974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-server-conf\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228051 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2542eba4-43d2-4108-a6f0-8eb4a1714f77-pod-info\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228105 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-tls\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228139 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-plugins\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228199 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2542eba4-43d2-4108-a6f0-8eb4a1714f77-erlang-cookie-secret\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228226 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-confd\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228279 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-erlang-cookie\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228317 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghxb\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-kube-api-access-2ghxb\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228405 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-plugins-conf\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228441 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.228535 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-config-data\") pod \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\" (UID: \"2542eba4-43d2-4108-a6f0-8eb4a1714f77\") " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 
12:07:17.228881 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.229174 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.229521 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.237304 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.240950 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2542eba4-43d2-4108-a6f0-8eb4a1714f77-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.241561 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2542eba4-43d2-4108-a6f0-8eb4a1714f77-pod-info" (OuterVolumeSpecName: "pod-info") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.243109 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.252581 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-kube-api-access-2ghxb" (OuterVolumeSpecName: "kube-api-access-2ghxb") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "kube-api-access-2ghxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.266406 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-config-data" (OuterVolumeSpecName: "config-data") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.278408 4958 generic.go:334] "Generic (PLEG): container finished" podID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerID="327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e" exitCode=0 Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.278446 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2542eba4-43d2-4108-a6f0-8eb4a1714f77","Type":"ContainerDied","Data":"327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e"} Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.278471 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2542eba4-43d2-4108-a6f0-8eb4a1714f77","Type":"ContainerDied","Data":"3950b8bfde8a63186deae60f23ede73b06b58698e28846afd98c8d3d17e8edf2"} Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.278485 4958 scope.go:117] "RemoveContainer" containerID="327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.278592 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.293717 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-server-conf" (OuterVolumeSpecName: "server-conf") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.330974 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331012 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331025 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2542eba4-43d2-4108-a6f0-8eb4a1714f77-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331038 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331051 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghxb\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-kube-api-access-2ghxb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331062 4958 reconciler_common.go:293] "Volume detached for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331098 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331111 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331123 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2542eba4-43d2-4108-a6f0-8eb4a1714f77-server-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.331134 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2542eba4-43d2-4108-a6f0-8eb4a1714f77-pod-info\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.346916 4958 scope.go:117] "RemoveContainer" containerID="7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.371454 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.373018 4958 scope.go:117] "RemoveContainer" containerID="327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e" Oct 06 12:07:17 crc kubenswrapper[4958]: E1006 12:07:17.373479 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e\": container with ID starting with 327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e not found: ID does not exist" containerID="327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.373553 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e"} err="failed to get container status \"327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e\": rpc error: code = NotFound desc = could not find container \"327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e\": container with ID starting with 327e47a6ce4967557ddf6edf15c775605ac3aaa4237286f9ef9ba8bb333db27e not found: ID does not exist" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.373610 4958 scope.go:117] "RemoveContainer" containerID="7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697" Oct 06 12:07:17 crc kubenswrapper[4958]: E1006 12:07:17.373972 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697\": container with ID starting with 7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697 not found: ID does not exist" containerID="7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.374018 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697"} err="failed to get container status \"7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697\": rpc error: code = NotFound desc = could not find container \"7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697\": container with ID 
starting with 7e0e80d0691399bbda45e00199dfd3f3bdb4ab8902cdd466a7d63235da759697 not found: ID does not exist" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.381497 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2542eba4-43d2-4108-a6f0-8eb4a1714f77" (UID: "2542eba4-43d2-4108-a6f0-8eb4a1714f77"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.432493 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2542eba4-43d2-4108-a6f0-8eb4a1714f77-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.432528 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.625955 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.640510 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.653187 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:17 crc kubenswrapper[4958]: E1006 12:07:17.653758 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="rabbitmq" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.653797 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="rabbitmq" Oct 06 12:07:17 crc kubenswrapper[4958]: E1006 12:07:17.653816 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="setup-container" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.653823 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="setup-container" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.654055 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" containerName="rabbitmq" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.655355 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.657863 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.658383 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jkfbz" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.658752 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.659177 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.659241 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.659408 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.659478 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.685460 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.839883 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.839980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840002 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840127 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc87b1-4709-4476-a597-9154c5c3a322-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 
12:07:17.840208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840272 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-config-data\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840298 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc87b1-4709-4476-a597-9154c5c3a322-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840378 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.840400 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvtc\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-kube-api-access-4kvtc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942456 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942482 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvtc\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-kube-api-access-4kvtc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942535 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942575 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942597 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc87b1-4709-4476-a597-9154c5c3a322-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942706 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942740 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942771 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-config-data\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " 
pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.942796 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc87b1-4709-4476-a597-9154c5c3a322-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.944733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.944925 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.945210 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.945597 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc87b1-4709-4476-a597-9154c5c3a322-config-data\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.945624 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.947073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.948296 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.949261 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc87b1-4709-4476-a597-9154c5c3a322-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.949914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc87b1-4709-4476-a597-9154c5c3a322-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.955820 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " 
pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.966094 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kvtc\" (UniqueName: \"kubernetes.io/projected/02fc87b1-4709-4476-a597-9154c5c3a322-kube-api-access-4kvtc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:17 crc kubenswrapper[4958]: I1006 12:07:17.997577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"02fc87b1-4709-4476-a597-9154c5c3a322\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.099796 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.246975 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-config-data\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247050 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c931ada-9afe-4ec4-9f75-42db89dc36e8-pod-info\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247087 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf6dt\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-kube-api-access-pf6dt\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 
12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247126 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-plugins-conf\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247171 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-erlang-cookie\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247223 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c931ada-9afe-4ec4-9f75-42db89dc36e8-erlang-cookie-secret\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247278 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247328 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-server-conf\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247364 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-tls\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247394 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-confd\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.247411 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-plugins\") pod \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\" (UID: \"4c931ada-9afe-4ec4-9f75-42db89dc36e8\") " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.248120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.252086 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c931ada-9afe-4ec4-9f75-42db89dc36e8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.252118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.252514 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.252826 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4c931ada-9afe-4ec4-9f75-42db89dc36e8-pod-info" (OuterVolumeSpecName: "pod-info") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.253127 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.253654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.255944 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-kube-api-access-pf6dt" (OuterVolumeSpecName: "kube-api-access-pf6dt") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "kube-api-access-pf6dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.274317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-config-data" (OuterVolumeSpecName: "config-data") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.277983 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.292839 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerID="bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684" exitCode=0 Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.293000 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.293106 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c931ada-9afe-4ec4-9f75-42db89dc36e8","Type":"ContainerDied","Data":"bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684"} Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.293175 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c931ada-9afe-4ec4-9f75-42db89dc36e8","Type":"ContainerDied","Data":"76747f183e6353869963159a41c388cd7ddec98d4e1de9ee4df6d13537377d5c"} Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.293195 4958 scope.go:117] "RemoveContainer" containerID="bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.296187 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-server-conf" (OuterVolumeSpecName: "server-conf") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.334239 4958 scope.go:117] "RemoveContainer" containerID="9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349319 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349343 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349353 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c931ada-9afe-4ec4-9f75-42db89dc36e8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349374 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349383 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-server-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349391 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349399 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349407 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c931ada-9afe-4ec4-9f75-42db89dc36e8-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349414 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c931ada-9afe-4ec4-9f75-42db89dc36e8-pod-info\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.349438 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf6dt\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-kube-api-access-pf6dt\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.355948 4958 scope.go:117] "RemoveContainer" containerID="bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684" Oct 06 12:07:18 crc kubenswrapper[4958]: E1006 12:07:18.359853 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684\": container with ID starting with bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684 not found: ID does not exist" containerID="bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.359902 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684"} err="failed to get container status \"bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684\": rpc error: code = NotFound desc = could not find container 
\"bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684\": container with ID starting with bbc482e823030cc744cfe19b36baff376ce7c79d0bc3b6dd654dc5f2f2130684 not found: ID does not exist" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.359931 4958 scope.go:117] "RemoveContainer" containerID="9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6" Oct 06 12:07:18 crc kubenswrapper[4958]: E1006 12:07:18.360404 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6\": container with ID starting with 9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6 not found: ID does not exist" containerID="9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.360452 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6"} err="failed to get container status \"9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6\": rpc error: code = NotFound desc = could not find container \"9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6\": container with ID starting with 9d985a2890095c07e0d679b7da0afb5dd7107bb0d4001822c4341d3e1ea05dc6 not found: ID does not exist" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.371640 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4c931ada-9afe-4ec4-9f75-42db89dc36e8" (UID: "4c931ada-9afe-4ec4-9f75-42db89dc36e8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.381274 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.452419 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c931ada-9afe-4ec4-9f75-42db89dc36e8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.452742 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.645218 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.663799 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.690337 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:18 crc kubenswrapper[4958]: E1006 12:07:18.690776 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="rabbitmq" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.690793 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="rabbitmq" Oct 06 12:07:18 crc kubenswrapper[4958]: E1006 12:07:18.690817 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="setup-container" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.690823 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="setup-container" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.690987 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" containerName="rabbitmq" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.691986 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.696596 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.697435 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.697572 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.697691 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.697794 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.698033 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-82c5t" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.698211 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.700285 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762046 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762096 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35b611fd-63b3-4146-b713-3fef7c26c3c7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glvr7\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-kube-api-access-glvr7\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762260 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762334 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.762358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.774988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.775063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.775131 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.775218 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/35b611fd-63b3-4146-b713-3fef7c26c3c7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.777301 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877345 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877603 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877638 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877672 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35b611fd-63b3-4146-b713-3fef7c26c3c7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877718 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877739 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35b611fd-63b3-4146-b713-3fef7c26c3c7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877774 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glvr7\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-kube-api-access-glvr7\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.877953 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.878513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.878570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.878619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.878638 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.879031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.879170 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.879295 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.879400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35b611fd-63b3-4146-b713-3fef7c26c3c7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.879169 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.882592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35b611fd-63b3-4146-b713-3fef7c26c3c7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.883094 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35b611fd-63b3-4146-b713-3fef7c26c3c7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.884034 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 
12:07:18.886897 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.897137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glvr7\" (UniqueName: \"kubernetes.io/projected/35b611fd-63b3-4146-b713-3fef7c26c3c7-kube-api-access-glvr7\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.931777 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"35b611fd-63b3-4146-b713-3fef7c26c3c7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.942478 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2542eba4-43d2-4108-a6f0-8eb4a1714f77" path="/var/lib/kubelet/pods/2542eba4-43d2-4108-a6f0-8eb4a1714f77/volumes" Oct 06 12:07:18 crc kubenswrapper[4958]: I1006 12:07:18.944743 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c931ada-9afe-4ec4-9f75-42db89dc36e8" path="/var/lib/kubelet/pods/4c931ada-9afe-4ec4-9f75-42db89dc36e8/volumes" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.228561 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.339962 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02fc87b1-4709-4476-a597-9154c5c3a322","Type":"ContainerStarted","Data":"09b59f456324eaecf22fb9f64ee232ceb3c4933e4500e725ef9fb48fdd316287"} Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.544239 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-69mp6"] Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.545711 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.549983 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.557042 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-69mp6"] Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.677479 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:19 crc kubenswrapper[4958]: W1006 12:07:19.682048 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35b611fd_63b3_4146_b713_3fef7c26c3c7.slice/crio-4c90c70e3a436c04abed1bab3c9cdd0b15e17d88e4abbd782eea106a295b8954 WatchSource:0}: Error finding container 4c90c70e3a436c04abed1bab3c9cdd0b15e17d88e4abbd782eea106a295b8954: Status 404 returned error can't find the container with id 4c90c70e3a436c04abed1bab3c9cdd0b15e17d88e4abbd782eea106a295b8954 Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.693101 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-svc\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.693294 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-config\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.693411 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.693673 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.693867 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.693897 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.694220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmfr\" (UniqueName: \"kubernetes.io/projected/c3215d95-bf62-4837-9c49-bb0b3f9661ae-kube-api-access-xdmfr\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.795882 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.795935 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.796039 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmfr\" (UniqueName: \"kubernetes.io/projected/c3215d95-bf62-4837-9c49-bb0b3f9661ae-kube-api-access-xdmfr\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.796122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-svc\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.796170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-config\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.796231 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.796851 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.797109 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.797142 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-svc\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.797274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-config\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.797529 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.799031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.799096 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.831202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmfr\" (UniqueName: 
\"kubernetes.io/projected/c3215d95-bf62-4837-9c49-bb0b3f9661ae-kube-api-access-xdmfr\") pod \"dnsmasq-dns-67b789f86c-69mp6\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:19 crc kubenswrapper[4958]: I1006 12:07:19.861978 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:20 crc kubenswrapper[4958]: I1006 12:07:20.356959 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02fc87b1-4709-4476-a597-9154c5c3a322","Type":"ContainerStarted","Data":"b1188a043d641446dc6e6de9255e33aa6ea7159e14634014b416566af9a60c9a"} Oct 06 12:07:20 crc kubenswrapper[4958]: I1006 12:07:20.359389 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35b611fd-63b3-4146-b713-3fef7c26c3c7","Type":"ContainerStarted","Data":"4c90c70e3a436c04abed1bab3c9cdd0b15e17d88e4abbd782eea106a295b8954"} Oct 06 12:07:20 crc kubenswrapper[4958]: I1006 12:07:20.404627 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-69mp6"] Oct 06 12:07:20 crc kubenswrapper[4958]: W1006 12:07:20.405662 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3215d95_bf62_4837_9c49_bb0b3f9661ae.slice/crio-6adced24431f778a43261782d88df5718b3e01a1b9cabf315acf2f081a99a710 WatchSource:0}: Error finding container 6adced24431f778a43261782d88df5718b3e01a1b9cabf315acf2f081a99a710: Status 404 returned error can't find the container with id 6adced24431f778a43261782d88df5718b3e01a1b9cabf315acf2f081a99a710 Oct 06 12:07:21 crc kubenswrapper[4958]: I1006 12:07:21.373410 4958 generic.go:334] "Generic (PLEG): container finished" podID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerID="3d2051f62af88253d6a0b63103ce26d43ad74b29eaa513c6c661de6de5d65a18" exitCode=0 Oct 06 12:07:21 crc 
kubenswrapper[4958]: I1006 12:07:21.373492 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" event={"ID":"c3215d95-bf62-4837-9c49-bb0b3f9661ae","Type":"ContainerDied","Data":"3d2051f62af88253d6a0b63103ce26d43ad74b29eaa513c6c661de6de5d65a18"} Oct 06 12:07:21 crc kubenswrapper[4958]: I1006 12:07:21.373854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" event={"ID":"c3215d95-bf62-4837-9c49-bb0b3f9661ae","Type":"ContainerStarted","Data":"6adced24431f778a43261782d88df5718b3e01a1b9cabf315acf2f081a99a710"} Oct 06 12:07:22 crc kubenswrapper[4958]: I1006 12:07:22.387101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35b611fd-63b3-4146-b713-3fef7c26c3c7","Type":"ContainerStarted","Data":"93127c3e5cb3f039de7fd74a6833d511741288e5c72d76b34cb93575301a20f0"} Oct 06 12:07:22 crc kubenswrapper[4958]: I1006 12:07:22.393053 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" event={"ID":"c3215d95-bf62-4837-9c49-bb0b3f9661ae","Type":"ContainerStarted","Data":"c3c3a595edba1d6d1b5fef7fe143648648025deba9d2356690546b00ec14cebe"} Oct 06 12:07:22 crc kubenswrapper[4958]: I1006 12:07:22.393552 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:22 crc kubenswrapper[4958]: I1006 12:07:22.455709 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" podStartSLOduration=3.455680606 podStartE2EDuration="3.455680606s" podCreationTimestamp="2025-10-06 12:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:22.44893438 +0000 UTC m=+1196.334959698" watchObservedRunningTime="2025-10-06 12:07:22.455680606 +0000 UTC m=+1196.341705964" Oct 06 12:07:23 crc 
kubenswrapper[4958]: I1006 12:07:23.801464 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:07:23 crc kubenswrapper[4958]: I1006 12:07:23.802233 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:07:29 crc kubenswrapper[4958]: I1006 12:07:29.863302 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:29 crc kubenswrapper[4958]: I1006 12:07:29.952636 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qz6ck"] Oct 06 12:07:29 crc kubenswrapper[4958]: I1006 12:07:29.953077 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" podUID="bae20f40-c739-424c-bfe7-14968520377e" containerName="dnsmasq-dns" containerID="cri-o://a89a96b1b69b14141f600174903d13b90a573a98c92971fefbdb276f665050b4" gracePeriod=10 Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.082235 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-5lvxx"] Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.084382 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.122303 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-5lvxx"] Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.233812 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.233857 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.233900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.233941 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-config\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.233982 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvfd\" (UniqueName: \"kubernetes.io/projected/760d1ffb-81bb-4765-865c-c655d0886553-kube-api-access-bxvfd\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.234030 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.234048 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.335996 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.336331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.336379 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.336421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-config\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.336464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvfd\" (UniqueName: \"kubernetes.io/projected/760d1ffb-81bb-4765-865c-c655d0886553-kube-api-access-bxvfd\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.336507 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.336526 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.337453 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.338071 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.338720 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.339445 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.339798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.339895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d1ffb-81bb-4765-865c-c655d0886553-config\") pod 
\"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.356589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvfd\" (UniqueName: \"kubernetes.io/projected/760d1ffb-81bb-4765-865c-c655d0886553-kube-api-access-bxvfd\") pod \"dnsmasq-dns-cb6ffcf87-5lvxx\" (UID: \"760d1ffb-81bb-4765-865c-c655d0886553\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.436926 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.588197 4958 generic.go:334] "Generic (PLEG): container finished" podID="bae20f40-c739-424c-bfe7-14968520377e" containerID="a89a96b1b69b14141f600174903d13b90a573a98c92971fefbdb276f665050b4" exitCode=0 Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.588241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" event={"ID":"bae20f40-c739-424c-bfe7-14968520377e","Type":"ContainerDied","Data":"a89a96b1b69b14141f600174903d13b90a573a98c92971fefbdb276f665050b4"} Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.588267 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" event={"ID":"bae20f40-c739-424c-bfe7-14968520377e","Type":"ContainerDied","Data":"7b2f8e447ba92db42cf11fba39144d0e05a06de1cbba0408510bfe6e1c58a08a"} Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.588279 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2f8e447ba92db42cf11fba39144d0e05a06de1cbba0408510bfe6e1c58a08a" Oct 06 12:07:30 crc kubenswrapper[4958]: I1006 12:07:30.588800 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.752996 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-svc\") pod \"bae20f40-c739-424c-bfe7-14968520377e\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.753345 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-swift-storage-0\") pod \"bae20f40-c739-424c-bfe7-14968520377e\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.753374 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-config\") pod \"bae20f40-c739-424c-bfe7-14968520377e\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.753395 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-569lb\" (UniqueName: \"kubernetes.io/projected/bae20f40-c739-424c-bfe7-14968520377e-kube-api-access-569lb\") pod \"bae20f40-c739-424c-bfe7-14968520377e\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.753413 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-nb\") pod \"bae20f40-c739-424c-bfe7-14968520377e\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.753462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-sb\") pod \"bae20f40-c739-424c-bfe7-14968520377e\" (UID: \"bae20f40-c739-424c-bfe7-14968520377e\") " Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.763887 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae20f40-c739-424c-bfe7-14968520377e-kube-api-access-569lb" (OuterVolumeSpecName: "kube-api-access-569lb") pod "bae20f40-c739-424c-bfe7-14968520377e" (UID: "bae20f40-c739-424c-bfe7-14968520377e"). InnerVolumeSpecName "kube-api-access-569lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.804136 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bae20f40-c739-424c-bfe7-14968520377e" (UID: "bae20f40-c739-424c-bfe7-14968520377e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.810720 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bae20f40-c739-424c-bfe7-14968520377e" (UID: "bae20f40-c739-424c-bfe7-14968520377e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.822753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-config" (OuterVolumeSpecName: "config") pod "bae20f40-c739-424c-bfe7-14968520377e" (UID: "bae20f40-c739-424c-bfe7-14968520377e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.827353 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bae20f40-c739-424c-bfe7-14968520377e" (UID: "bae20f40-c739-424c-bfe7-14968520377e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.831075 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bae20f40-c739-424c-bfe7-14968520377e" (UID: "bae20f40-c739-424c-bfe7-14968520377e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.856169 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.856193 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.856202 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.856210 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-569lb\" (UniqueName: \"kubernetes.io/projected/bae20f40-c739-424c-bfe7-14968520377e-kube-api-access-569lb\") on node 
\"crc\" DevicePath \"\"" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.856220 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:30.856227 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae20f40-c739-424c-bfe7-14968520377e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:31.596086 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qz6ck" Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:31.615279 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-5lvxx"] Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:31.624097 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qz6ck"] Oct 06 12:07:31 crc kubenswrapper[4958]: I1006 12:07:31.632539 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qz6ck"] Oct 06 12:07:32 crc kubenswrapper[4958]: I1006 12:07:32.607688 4958 generic.go:334] "Generic (PLEG): container finished" podID="760d1ffb-81bb-4765-865c-c655d0886553" containerID="42c3cad5d5213ebff618ade8cb679b2aaeaad2770349c751cdd090ae58e64949" exitCode=0 Oct 06 12:07:32 crc kubenswrapper[4958]: I1006 12:07:32.607756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" event={"ID":"760d1ffb-81bb-4765-865c-c655d0886553","Type":"ContainerDied","Data":"42c3cad5d5213ebff618ade8cb679b2aaeaad2770349c751cdd090ae58e64949"} Oct 06 12:07:32 crc kubenswrapper[4958]: I1006 12:07:32.609350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" 
event={"ID":"760d1ffb-81bb-4765-865c-c655d0886553","Type":"ContainerStarted","Data":"96340e40b0d1fd12e48a534387decab02dca9411b54abef7d49a4a2fce3624af"} Oct 06 12:07:32 crc kubenswrapper[4958]: I1006 12:07:32.931847 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae20f40-c739-424c-bfe7-14968520377e" path="/var/lib/kubelet/pods/bae20f40-c739-424c-bfe7-14968520377e/volumes" Oct 06 12:07:33 crc kubenswrapper[4958]: I1006 12:07:33.626233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" event={"ID":"760d1ffb-81bb-4765-865c-c655d0886553","Type":"ContainerStarted","Data":"a71cd59187933c54abdf5ba50ab2e36002e60008558f8f1d146fa16bc1fde4bf"} Oct 06 12:07:33 crc kubenswrapper[4958]: I1006 12:07:33.627068 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:33 crc kubenswrapper[4958]: I1006 12:07:33.655655 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" podStartSLOduration=3.6556334120000002 podStartE2EDuration="3.655633412s" podCreationTimestamp="2025-10-06 12:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:33.647879996 +0000 UTC m=+1207.533905304" watchObservedRunningTime="2025-10-06 12:07:33.655633412 +0000 UTC m=+1207.541658720" Oct 06 12:07:40 crc kubenswrapper[4958]: I1006 12:07:40.439308 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-5lvxx" Oct 06 12:07:40 crc kubenswrapper[4958]: I1006 12:07:40.537718 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-69mp6"] Oct 06 12:07:40 crc kubenswrapper[4958]: I1006 12:07:40.538276 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" 
podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerName="dnsmasq-dns" containerID="cri-o://c3c3a595edba1d6d1b5fef7fe143648648025deba9d2356690546b00ec14cebe" gracePeriod=10 Oct 06 12:07:40 crc kubenswrapper[4958]: I1006 12:07:40.719724 4958 generic.go:334] "Generic (PLEG): container finished" podID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerID="c3c3a595edba1d6d1b5fef7fe143648648025deba9d2356690546b00ec14cebe" exitCode=0 Oct 06 12:07:40 crc kubenswrapper[4958]: I1006 12:07:40.719773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" event={"ID":"c3215d95-bf62-4837-9c49-bb0b3f9661ae","Type":"ContainerDied","Data":"c3c3a595edba1d6d1b5fef7fe143648648025deba9d2356690546b00ec14cebe"} Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.102023 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185113 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-swift-storage-0\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185239 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-nb\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185332 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-sb\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: 
\"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185391 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-svc\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185453 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-config\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185482 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-openstack-edpm-ipam\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.185613 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmfr\" (UniqueName: \"kubernetes.io/projected/c3215d95-bf62-4837-9c49-bb0b3f9661ae-kube-api-access-xdmfr\") pod \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\" (UID: \"c3215d95-bf62-4837-9c49-bb0b3f9661ae\") " Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.200987 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3215d95-bf62-4837-9c49-bb0b3f9661ae-kube-api-access-xdmfr" (OuterVolumeSpecName: "kube-api-access-xdmfr") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "kube-api-access-xdmfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.248134 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.249526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.254223 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.271434 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.271549 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-config" (OuterVolumeSpecName: "config") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.280894 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3215d95-bf62-4837-9c49-bb0b3f9661ae" (UID: "c3215d95-bf62-4837-9c49-bb0b3f9661ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.288783 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.288802 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.288812 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.288822 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmfr\" (UniqueName: \"kubernetes.io/projected/c3215d95-bf62-4837-9c49-bb0b3f9661ae-kube-api-access-xdmfr\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc 
kubenswrapper[4958]: I1006 12:07:41.288830 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.288838 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.288846 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3215d95-bf62-4837-9c49-bb0b3f9661ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.732902 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" event={"ID":"c3215d95-bf62-4837-9c49-bb0b3f9661ae","Type":"ContainerDied","Data":"6adced24431f778a43261782d88df5718b3e01a1b9cabf315acf2f081a99a710"} Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.732953 4958 scope.go:117] "RemoveContainer" containerID="c3c3a595edba1d6d1b5fef7fe143648648025deba9d2356690546b00ec14cebe" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.733115 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-69mp6" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.769214 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-69mp6"] Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.781093 4958 scope.go:117] "RemoveContainer" containerID="3d2051f62af88253d6a0b63103ce26d43ad74b29eaa513c6c661de6de5d65a18" Oct 06 12:07:41 crc kubenswrapper[4958]: I1006 12:07:41.782685 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-69mp6"] Oct 06 12:07:42 crc kubenswrapper[4958]: I1006 12:07:42.930904 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" path="/var/lib/kubelet/pods/c3215d95-bf62-4837-9c49-bb0b3f9661ae/volumes" Oct 06 12:07:52 crc kubenswrapper[4958]: I1006 12:07:52.853272 4958 generic.go:334] "Generic (PLEG): container finished" podID="02fc87b1-4709-4476-a597-9154c5c3a322" containerID="b1188a043d641446dc6e6de9255e33aa6ea7159e14634014b416566af9a60c9a" exitCode=0 Oct 06 12:07:52 crc kubenswrapper[4958]: I1006 12:07:52.853350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02fc87b1-4709-4476-a597-9154c5c3a322","Type":"ContainerDied","Data":"b1188a043d641446dc6e6de9255e33aa6ea7159e14634014b416566af9a60c9a"} Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.468848 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4"] Oct 06 12:07:53 crc kubenswrapper[4958]: E1006 12:07:53.469255 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae20f40-c739-424c-bfe7-14968520377e" containerName="dnsmasq-dns" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.469285 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae20f40-c739-424c-bfe7-14968520377e" containerName="dnsmasq-dns" Oct 06 12:07:53 crc kubenswrapper[4958]: 
E1006 12:07:53.469305 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae20f40-c739-424c-bfe7-14968520377e" containerName="init" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.469313 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae20f40-c739-424c-bfe7-14968520377e" containerName="init" Oct 06 12:07:53 crc kubenswrapper[4958]: E1006 12:07:53.469327 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerName="dnsmasq-dns" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.469335 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerName="dnsmasq-dns" Oct 06 12:07:53 crc kubenswrapper[4958]: E1006 12:07:53.469357 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerName="init" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.469364 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerName="init" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.469564 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3215d95-bf62-4837-9c49-bb0b3f9661ae" containerName="dnsmasq-dns" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.469581 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae20f40-c739-424c-bfe7-14968520377e" containerName="dnsmasq-dns" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.470344 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.473946 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.474814 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.476615 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.480031 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4"] Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.486070 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.520337 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.520429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.520599 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.520830 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5n6c\" (UniqueName: \"kubernetes.io/projected/99935553-e8d4-497e-be84-8fa4a807fd72-kube-api-access-q5n6c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.622168 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5n6c\" (UniqueName: \"kubernetes.io/projected/99935553-e8d4-497e-be84-8fa4a807fd72-kube-api-access-q5n6c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.622534 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.622584 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.623392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.627752 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.627831 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.629112 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.643004 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5n6c\" (UniqueName: \"kubernetes.io/projected/99935553-e8d4-497e-be84-8fa4a807fd72-kube-api-access-q5n6c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.784665 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.809671 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.809766 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.865436 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02fc87b1-4709-4476-a597-9154c5c3a322","Type":"ContainerStarted","Data":"db5d2fa25f78416e6e47fc1413e215c958065de426854e14fbf75c3f9427d84d"} Oct 06 12:07:53 crc kubenswrapper[4958]: I1006 12:07:53.866427 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 12:07:54 crc kubenswrapper[4958]: W1006 12:07:54.408611 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99935553_e8d4_497e_be84_8fa4a807fd72.slice/crio-b038c42b0dce4f80d624c0d988021fc8170b573a3cfc5e14527e74d40de87db0 WatchSource:0}: Error finding container b038c42b0dce4f80d624c0d988021fc8170b573a3cfc5e14527e74d40de87db0: Status 404 returned error can't find the container with id b038c42b0dce4f80d624c0d988021fc8170b573a3cfc5e14527e74d40de87db0 Oct 06 12:07:54 crc kubenswrapper[4958]: I1006 12:07:54.411861 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:07:54 crc kubenswrapper[4958]: I1006 12:07:54.412221 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.41220282 podStartE2EDuration="37.41220282s" podCreationTimestamp="2025-10-06 12:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:53.895527547 +0000 UTC m=+1227.781552875" watchObservedRunningTime="2025-10-06 12:07:54.41220282 +0000 UTC m=+1228.298228138" Oct 06 12:07:54 crc kubenswrapper[4958]: I1006 12:07:54.412742 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4"] Oct 06 12:07:54 crc kubenswrapper[4958]: I1006 12:07:54.876267 4958 generic.go:334] "Generic (PLEG): container finished" podID="35b611fd-63b3-4146-b713-3fef7c26c3c7" containerID="93127c3e5cb3f039de7fd74a6833d511741288e5c72d76b34cb93575301a20f0" exitCode=0 Oct 06 12:07:54 crc kubenswrapper[4958]: I1006 12:07:54.876384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35b611fd-63b3-4146-b713-3fef7c26c3c7","Type":"ContainerDied","Data":"93127c3e5cb3f039de7fd74a6833d511741288e5c72d76b34cb93575301a20f0"} Oct 06 12:07:54 crc kubenswrapper[4958]: I1006 12:07:54.881565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" event={"ID":"99935553-e8d4-497e-be84-8fa4a807fd72","Type":"ContainerStarted","Data":"b038c42b0dce4f80d624c0d988021fc8170b573a3cfc5e14527e74d40de87db0"} Oct 06 12:07:55 crc kubenswrapper[4958]: I1006 12:07:55.908594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35b611fd-63b3-4146-b713-3fef7c26c3c7","Type":"ContainerStarted","Data":"2634aa06dad5bb2a712d2a80d72ca14334d781cf8bd3121011676d0d4249fe99"} Oct 06 12:07:55 crc kubenswrapper[4958]: I1006 12:07:55.909012 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:55 crc kubenswrapper[4958]: I1006 12:07:55.938622 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.938601585 podStartE2EDuration="37.938601585s" podCreationTimestamp="2025-10-06 12:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:55.93482925 +0000 UTC m=+1229.820854578" watchObservedRunningTime="2025-10-06 12:07:55.938601585 +0000 UTC m=+1229.824626893" Oct 06 12:08:05 crc kubenswrapper[4958]: I1006 12:08:05.018377 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" event={"ID":"99935553-e8d4-497e-be84-8fa4a807fd72","Type":"ContainerStarted","Data":"80b0ce6c359caf908598fa5c31c08d42e7f0c06eea6922b4e76d31f45083c5ed"} Oct 06 12:08:05 crc kubenswrapper[4958]: I1006 12:08:05.036888 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" podStartSLOduration=1.882319101 podStartE2EDuration="12.036868854s" podCreationTimestamp="2025-10-06 12:07:53 +0000 UTC" firstStartedPulling="2025-10-06 12:07:54.411522839 +0000 UTC 
m=+1228.297548157" lastFinishedPulling="2025-10-06 12:08:04.566072602 +0000 UTC m=+1238.452097910" observedRunningTime="2025-10-06 12:08:05.033681937 +0000 UTC m=+1238.919707255" watchObservedRunningTime="2025-10-06 12:08:05.036868854 +0000 UTC m=+1238.922894172" Oct 06 12:08:08 crc kubenswrapper[4958]: I1006 12:08:08.281479 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 12:08:09 crc kubenswrapper[4958]: I1006 12:08:09.232302 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:08:17 crc kubenswrapper[4958]: I1006 12:08:17.152724 4958 generic.go:334] "Generic (PLEG): container finished" podID="99935553-e8d4-497e-be84-8fa4a807fd72" containerID="80b0ce6c359caf908598fa5c31c08d42e7f0c06eea6922b4e76d31f45083c5ed" exitCode=0 Oct 06 12:08:17 crc kubenswrapper[4958]: I1006 12:08:17.153087 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" event={"ID":"99935553-e8d4-497e-be84-8fa4a807fd72","Type":"ContainerDied","Data":"80b0ce6c359caf908598fa5c31c08d42e7f0c06eea6922b4e76d31f45083c5ed"} Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.694077 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.788139 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-ssh-key\") pod \"99935553-e8d4-497e-be84-8fa4a807fd72\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.788233 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5n6c\" (UniqueName: \"kubernetes.io/projected/99935553-e8d4-497e-be84-8fa4a807fd72-kube-api-access-q5n6c\") pod \"99935553-e8d4-497e-be84-8fa4a807fd72\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.788367 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-repo-setup-combined-ca-bundle\") pod \"99935553-e8d4-497e-be84-8fa4a807fd72\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.788443 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-inventory\") pod \"99935553-e8d4-497e-be84-8fa4a807fd72\" (UID: \"99935553-e8d4-497e-be84-8fa4a807fd72\") " Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.793915 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "99935553-e8d4-497e-be84-8fa4a807fd72" (UID: "99935553-e8d4-497e-be84-8fa4a807fd72"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.794630 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99935553-e8d4-497e-be84-8fa4a807fd72-kube-api-access-q5n6c" (OuterVolumeSpecName: "kube-api-access-q5n6c") pod "99935553-e8d4-497e-be84-8fa4a807fd72" (UID: "99935553-e8d4-497e-be84-8fa4a807fd72"). InnerVolumeSpecName "kube-api-access-q5n6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.821223 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-inventory" (OuterVolumeSpecName: "inventory") pod "99935553-e8d4-497e-be84-8fa4a807fd72" (UID: "99935553-e8d4-497e-be84-8fa4a807fd72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.826601 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99935553-e8d4-497e-be84-8fa4a807fd72" (UID: "99935553-e8d4-497e-be84-8fa4a807fd72"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.890714 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.890755 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5n6c\" (UniqueName: \"kubernetes.io/projected/99935553-e8d4-497e-be84-8fa4a807fd72-kube-api-access-q5n6c\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.890774 4958 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:18 crc kubenswrapper[4958]: I1006 12:08:18.890794 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99935553-e8d4-497e-be84-8fa4a807fd72-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.180846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" event={"ID":"99935553-e8d4-497e-be84-8fa4a807fd72","Type":"ContainerDied","Data":"b038c42b0dce4f80d624c0d988021fc8170b573a3cfc5e14527e74d40de87db0"} Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.180922 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b038c42b0dce4f80d624c0d988021fc8170b573a3cfc5e14527e74d40de87db0" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.180930 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.289282 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d"] Oct 06 12:08:19 crc kubenswrapper[4958]: E1006 12:08:19.289820 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99935553-e8d4-497e-be84-8fa4a807fd72" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.289842 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="99935553-e8d4-497e-be84-8fa4a807fd72" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.290099 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="99935553-e8d4-497e-be84-8fa4a807fd72" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.290864 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.298391 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.298703 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.298834 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.300286 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.303202 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d"] Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.403517 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.403679 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdj8d\" (UniqueName: \"kubernetes.io/projected/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-kube-api-access-sdj8d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.403742 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.518596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdj8d\" (UniqueName: \"kubernetes.io/projected/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-kube-api-access-sdj8d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.518677 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.518741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.522755 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.538930 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.547110 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdj8d\" (UniqueName: \"kubernetes.io/projected/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-kube-api-access-sdj8d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-m996d\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:19 crc kubenswrapper[4958]: I1006 12:08:19.629056 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:20 crc kubenswrapper[4958]: I1006 12:08:20.248139 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d"] Oct 06 12:08:21 crc kubenswrapper[4958]: I1006 12:08:21.208211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" event={"ID":"3dd583f8-4c3f-4059-8b0f-621021a4eaa1","Type":"ContainerStarted","Data":"71754d1e2f268bc6f0385db345d31fd582efecefd3af0bac15534f4b49bdfffd"} Oct 06 12:08:22 crc kubenswrapper[4958]: I1006 12:08:22.223416 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" event={"ID":"3dd583f8-4c3f-4059-8b0f-621021a4eaa1","Type":"ContainerStarted","Data":"022185a68a791b9a1db25dfbaa21f8e514aa54e8e297321ba6fbf00b78fb0c87"} Oct 06 12:08:22 crc kubenswrapper[4958]: I1006 12:08:22.252900 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" podStartSLOduration=2.227656163 podStartE2EDuration="3.252868639s" podCreationTimestamp="2025-10-06 12:08:19 +0000 UTC" firstStartedPulling="2025-10-06 12:08:20.252577467 +0000 UTC m=+1254.138602815" lastFinishedPulling="2025-10-06 12:08:21.277789983 +0000 UTC m=+1255.163815291" observedRunningTime="2025-10-06 12:08:22.245749882 +0000 UTC m=+1256.131775250" watchObservedRunningTime="2025-10-06 12:08:22.252868639 +0000 UTC m=+1256.138893987" Oct 06 12:08:23 crc kubenswrapper[4958]: I1006 12:08:23.802091 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:08:23 crc kubenswrapper[4958]: I1006 
12:08:23.802177 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:08:23 crc kubenswrapper[4958]: I1006 12:08:23.802225 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:08:23 crc kubenswrapper[4958]: I1006 12:08:23.803037 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58063f347fe5bcd8235160b2a3cc2b46057a0950357e7063907acb38c848571d"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:08:23 crc kubenswrapper[4958]: I1006 12:08:23.803109 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://58063f347fe5bcd8235160b2a3cc2b46057a0950357e7063907acb38c848571d" gracePeriod=600 Oct 06 12:08:24 crc kubenswrapper[4958]: I1006 12:08:24.247066 4958 generic.go:334] "Generic (PLEG): container finished" podID="3dd583f8-4c3f-4059-8b0f-621021a4eaa1" containerID="022185a68a791b9a1db25dfbaa21f8e514aa54e8e297321ba6fbf00b78fb0c87" exitCode=0 Oct 06 12:08:24 crc kubenswrapper[4958]: I1006 12:08:24.247159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" event={"ID":"3dd583f8-4c3f-4059-8b0f-621021a4eaa1","Type":"ContainerDied","Data":"022185a68a791b9a1db25dfbaa21f8e514aa54e8e297321ba6fbf00b78fb0c87"} Oct 06 12:08:24 crc 
kubenswrapper[4958]: I1006 12:08:24.250357 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="58063f347fe5bcd8235160b2a3cc2b46057a0950357e7063907acb38c848571d" exitCode=0 Oct 06 12:08:24 crc kubenswrapper[4958]: I1006 12:08:24.250398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"58063f347fe5bcd8235160b2a3cc2b46057a0950357e7063907acb38c848571d"} Oct 06 12:08:24 crc kubenswrapper[4958]: I1006 12:08:24.250482 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"206f17391c9f0682bafb2e779c977ed48daa6668033056ec39d18f8882e93154"} Oct 06 12:08:24 crc kubenswrapper[4958]: I1006 12:08:24.250506 4958 scope.go:117] "RemoveContainer" containerID="d557eb58019a825d47465bf11cc9a867134d838f4f8d2d5b54e629c90b675773" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.703593 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.766609 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdj8d\" (UniqueName: \"kubernetes.io/projected/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-kube-api-access-sdj8d\") pod \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.767886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-inventory\") pod \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.768281 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-ssh-key\") pod \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\" (UID: \"3dd583f8-4c3f-4059-8b0f-621021a4eaa1\") " Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.780424 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-kube-api-access-sdj8d" (OuterVolumeSpecName: "kube-api-access-sdj8d") pod "3dd583f8-4c3f-4059-8b0f-621021a4eaa1" (UID: "3dd583f8-4c3f-4059-8b0f-621021a4eaa1"). InnerVolumeSpecName "kube-api-access-sdj8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.796985 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-inventory" (OuterVolumeSpecName: "inventory") pod "3dd583f8-4c3f-4059-8b0f-621021a4eaa1" (UID: "3dd583f8-4c3f-4059-8b0f-621021a4eaa1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.797433 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3dd583f8-4c3f-4059-8b0f-621021a4eaa1" (UID: "3dd583f8-4c3f-4059-8b0f-621021a4eaa1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.870617 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.870989 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:25 crc kubenswrapper[4958]: I1006 12:08:25.871000 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdj8d\" (UniqueName: \"kubernetes.io/projected/3dd583f8-4c3f-4059-8b0f-621021a4eaa1-kube-api-access-sdj8d\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.290286 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" event={"ID":"3dd583f8-4c3f-4059-8b0f-621021a4eaa1","Type":"ContainerDied","Data":"71754d1e2f268bc6f0385db345d31fd582efecefd3af0bac15534f4b49bdfffd"} Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.290323 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71754d1e2f268bc6f0385db345d31fd582efecefd3af0bac15534f4b49bdfffd" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.290346 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-m996d" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.354852 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq"] Oct 06 12:08:26 crc kubenswrapper[4958]: E1006 12:08:26.355301 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd583f8-4c3f-4059-8b0f-621021a4eaa1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.355325 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd583f8-4c3f-4059-8b0f-621021a4eaa1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.355526 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd583f8-4c3f-4059-8b0f-621021a4eaa1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.356087 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.358408 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.358532 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.358571 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.358950 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.380656 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.380727 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.380883 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.381253 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9m52\" (UniqueName: \"kubernetes.io/projected/bff355e0-d99f-4997-81e9-849deb8cea2a-kube-api-access-g9m52\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.386384 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq"] Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.483486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9m52\" (UniqueName: \"kubernetes.io/projected/bff355e0-d99f-4997-81e9-849deb8cea2a-kube-api-access-g9m52\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.483677 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.483708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" 
(UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.483756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.490318 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.490572 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.490370 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.500610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g9m52\" (UniqueName: \"kubernetes.io/projected/bff355e0-d99f-4997-81e9-849deb8cea2a-kube-api-access-g9m52\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:26 crc kubenswrapper[4958]: I1006 12:08:26.686657 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:08:27 crc kubenswrapper[4958]: I1006 12:08:27.282373 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq"] Oct 06 12:08:27 crc kubenswrapper[4958]: W1006 12:08:27.285194 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbff355e0_d99f_4997_81e9_849deb8cea2a.slice/crio-705c50520a380f703a728bdc5be6fcc36c71c528e3949f81afb919333bd5e84d WatchSource:0}: Error finding container 705c50520a380f703a728bdc5be6fcc36c71c528e3949f81afb919333bd5e84d: Status 404 returned error can't find the container with id 705c50520a380f703a728bdc5be6fcc36c71c528e3949f81afb919333bd5e84d Oct 06 12:08:27 crc kubenswrapper[4958]: I1006 12:08:27.306850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" event={"ID":"bff355e0-d99f-4997-81e9-849deb8cea2a","Type":"ContainerStarted","Data":"705c50520a380f703a728bdc5be6fcc36c71c528e3949f81afb919333bd5e84d"} Oct 06 12:08:27 crc kubenswrapper[4958]: I1006 12:08:27.945814 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:08:28 crc kubenswrapper[4958]: I1006 12:08:28.317978 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" 
event={"ID":"bff355e0-d99f-4997-81e9-849deb8cea2a","Type":"ContainerStarted","Data":"c3630b079b6e72fc7c91725b1a6c7675013987051e89903c7dab2f95cdd35649"} Oct 06 12:08:28 crc kubenswrapper[4958]: I1006 12:08:28.339355 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" podStartSLOduration=1.685461637 podStartE2EDuration="2.339330788s" podCreationTimestamp="2025-10-06 12:08:26 +0000 UTC" firstStartedPulling="2025-10-06 12:08:27.289013915 +0000 UTC m=+1261.175039233" lastFinishedPulling="2025-10-06 12:08:27.942883076 +0000 UTC m=+1261.828908384" observedRunningTime="2025-10-06 12:08:28.333023866 +0000 UTC m=+1262.219049174" watchObservedRunningTime="2025-10-06 12:08:28.339330788 +0000 UTC m=+1262.225356106" Oct 06 12:09:28 crc kubenswrapper[4958]: I1006 12:09:28.326849 4958 scope.go:117] "RemoveContainer" containerID="30cd0c216f2064ac080a7bb4f98aba5b4c0e6e0ef7cc12f7ee211c9ad90da191" Oct 06 12:09:28 crc kubenswrapper[4958]: I1006 12:09:28.365136 4958 scope.go:117] "RemoveContainer" containerID="8d78d00d2c22dce09d3e13698fded654ebb19a162eb7bd6955ceee62046878be" Oct 06 12:10:28 crc kubenswrapper[4958]: I1006 12:10:28.480505 4958 scope.go:117] "RemoveContainer" containerID="5614923537c082f06e78dccd10ac04a4d36118a089c1c129838c8de2cb99de2e" Oct 06 12:10:28 crc kubenswrapper[4958]: I1006 12:10:28.501923 4958 scope.go:117] "RemoveContainer" containerID="c90491938e74b470e788868037a5482006c19d5c5efdcc1ba2eb386461331631" Oct 06 12:10:53 crc kubenswrapper[4958]: I1006 12:10:53.801564 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:10:53 crc kubenswrapper[4958]: I1006 12:10:53.802312 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.422966 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7552"] Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.429512 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.470814 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7552"] Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.515782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-catalog-content\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.515894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqjb\" (UniqueName: \"kubernetes.io/projected/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-kube-api-access-pvqjb\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.516473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-utilities\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " 
pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.619538 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-catalog-content\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.619623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqjb\" (UniqueName: \"kubernetes.io/projected/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-kube-api-access-pvqjb\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.619857 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-utilities\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.620546 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-utilities\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.620621 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-catalog-content\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc 
kubenswrapper[4958]: I1006 12:11:08.645085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqjb\" (UniqueName: \"kubernetes.io/projected/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-kube-api-access-pvqjb\") pod \"redhat-operators-x7552\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:08 crc kubenswrapper[4958]: I1006 12:11:08.768558 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:09 crc kubenswrapper[4958]: I1006 12:11:09.248672 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7552"] Oct 06 12:11:10 crc kubenswrapper[4958]: I1006 12:11:10.178431 4958 generic.go:334] "Generic (PLEG): container finished" podID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerID="8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be" exitCode=0 Oct 06 12:11:10 crc kubenswrapper[4958]: I1006 12:11:10.178704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerDied","Data":"8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be"} Oct 06 12:11:10 crc kubenswrapper[4958]: I1006 12:11:10.178733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerStarted","Data":"61157f40ec5fb2d5c97e6b60bda620ef441478c5cbc025ddefc45914875d22b7"} Oct 06 12:11:12 crc kubenswrapper[4958]: I1006 12:11:12.199635 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerStarted","Data":"e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751"} Oct 06 12:11:13 crc kubenswrapper[4958]: I1006 
12:11:13.220844 4958 generic.go:334] "Generic (PLEG): container finished" podID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerID="e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751" exitCode=0 Oct 06 12:11:13 crc kubenswrapper[4958]: I1006 12:11:13.220967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerDied","Data":"e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751"} Oct 06 12:11:15 crc kubenswrapper[4958]: I1006 12:11:15.254936 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerStarted","Data":"431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded"} Oct 06 12:11:15 crc kubenswrapper[4958]: I1006 12:11:15.289170 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7552" podStartSLOduration=3.450485161 podStartE2EDuration="7.289112675s" podCreationTimestamp="2025-10-06 12:11:08 +0000 UTC" firstStartedPulling="2025-10-06 12:11:10.181431202 +0000 UTC m=+1424.067456520" lastFinishedPulling="2025-10-06 12:11:14.020058686 +0000 UTC m=+1427.906084034" observedRunningTime="2025-10-06 12:11:15.286024135 +0000 UTC m=+1429.172049523" watchObservedRunningTime="2025-10-06 12:11:15.289112675 +0000 UTC m=+1429.175137993" Oct 06 12:11:18 crc kubenswrapper[4958]: I1006 12:11:18.769381 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:18 crc kubenswrapper[4958]: I1006 12:11:18.769877 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:19 crc kubenswrapper[4958]: I1006 12:11:19.835756 4958 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-x7552" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="registry-server" probeResult="failure" output=< Oct 06 12:11:19 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 12:11:19 crc kubenswrapper[4958]: > Oct 06 12:11:23 crc kubenswrapper[4958]: I1006 12:11:23.801817 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:11:23 crc kubenswrapper[4958]: I1006 12:11:23.803330 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.564503 4958 scope.go:117] "RemoveContainer" containerID="792e79d8587fa71c5c2166f0c3dea77b11fbc99452d0f4541c45ecf36b647b1c" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.596600 4958 scope.go:117] "RemoveContainer" containerID="a173d851cb9195499aa959a8606ee7d4412483ddb3785677a4229f4657e64563" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.632840 4958 scope.go:117] "RemoveContainer" containerID="52d57eb75384e2796cd25f00fea69b3e1d15a653bd62d82d2ef2327172ba9adb" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.664983 4958 scope.go:117] "RemoveContainer" containerID="afe45e6488225ef89ec4227b4ff72c61d54c922f0650d1793e15bd207c2ce8cc" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.697992 4958 scope.go:117] "RemoveContainer" containerID="ebe011f647b372de72d69d4f11ca1b0380879092844dd16287ee5becbf99b893" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 
12:11:28.726443 4958 scope.go:117] "RemoveContainer" containerID="a510678ee0d09a9b3aebaa16ea4ffde1fa2ecf35424ec5581a23101af74b1ddf" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.744631 4958 scope.go:117] "RemoveContainer" containerID="e0e9f7afc89b0f570dfd1840af7adfbc86f969e4f962b7456b60abfa70cc1bf5" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.761100 4958 scope.go:117] "RemoveContainer" containerID="7cfe3f206e733dd9574ff8d71c0e8684a9da98fd84191044392c6443c09e4f98" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.832285 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:28 crc kubenswrapper[4958]: I1006 12:11:28.883773 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:29 crc kubenswrapper[4958]: I1006 12:11:29.068518 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7552"] Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.430770 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7552" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="registry-server" containerID="cri-o://431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded" gracePeriod=2 Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.871860 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.919422 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-utilities\") pod \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.919792 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-catalog-content\") pod \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.920028 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvqjb\" (UniqueName: \"kubernetes.io/projected/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-kube-api-access-pvqjb\") pod \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\" (UID: \"4af9779c-4e49-4d5a-a09d-fd97e878ddcb\") " Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.921549 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-utilities" (OuterVolumeSpecName: "utilities") pod "4af9779c-4e49-4d5a-a09d-fd97e878ddcb" (UID: "4af9779c-4e49-4d5a-a09d-fd97e878ddcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:11:30 crc kubenswrapper[4958]: I1006 12:11:30.935207 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-kube-api-access-pvqjb" (OuterVolumeSpecName: "kube-api-access-pvqjb") pod "4af9779c-4e49-4d5a-a09d-fd97e878ddcb" (UID: "4af9779c-4e49-4d5a-a09d-fd97e878ddcb"). InnerVolumeSpecName "kube-api-access-pvqjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.022538 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvqjb\" (UniqueName: \"kubernetes.io/projected/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-kube-api-access-pvqjb\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.022577 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.022913 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4af9779c-4e49-4d5a-a09d-fd97e878ddcb" (UID: "4af9779c-4e49-4d5a-a09d-fd97e878ddcb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.124486 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af9779c-4e49-4d5a-a09d-fd97e878ddcb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.446319 4958 generic.go:334] "Generic (PLEG): container finished" podID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerID="431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded" exitCode=0 Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.446367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerDied","Data":"431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded"} Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.446392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-x7552" event={"ID":"4af9779c-4e49-4d5a-a09d-fd97e878ddcb","Type":"ContainerDied","Data":"61157f40ec5fb2d5c97e6b60bda620ef441478c5cbc025ddefc45914875d22b7"} Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.446390 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7552" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.446409 4958 scope.go:117] "RemoveContainer" containerID="431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.485666 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7552"] Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.488593 4958 scope.go:117] "RemoveContainer" containerID="e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.495007 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7552"] Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.520554 4958 scope.go:117] "RemoveContainer" containerID="8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.563113 4958 scope.go:117] "RemoveContainer" containerID="431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded" Oct 06 12:11:31 crc kubenswrapper[4958]: E1006 12:11:31.563505 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded\": container with ID starting with 431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded not found: ID does not exist" containerID="431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.563539 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded"} err="failed to get container status \"431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded\": rpc error: code = NotFound desc = could not find container \"431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded\": container with ID starting with 431b9728f478874f76de7801513f3250237d5fddaa54d5659c191df84a162ded not found: ID does not exist" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.563560 4958 scope.go:117] "RemoveContainer" containerID="e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751" Oct 06 12:11:31 crc kubenswrapper[4958]: E1006 12:11:31.563972 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751\": container with ID starting with e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751 not found: ID does not exist" containerID="e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.564025 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751"} err="failed to get container status \"e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751\": rpc error: code = NotFound desc = could not find container \"e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751\": container with ID starting with e61a7a5addad249e0f397d2920c6147602c8bb10c11a83f900929fa3206af751 not found: ID does not exist" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.564139 4958 scope.go:117] "RemoveContainer" containerID="8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be" Oct 06 12:11:31 crc kubenswrapper[4958]: E1006 
12:11:31.564526 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be\": container with ID starting with 8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be not found: ID does not exist" containerID="8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be" Oct 06 12:11:31 crc kubenswrapper[4958]: I1006 12:11:31.564560 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be"} err="failed to get container status \"8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be\": rpc error: code = NotFound desc = could not find container \"8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be\": container with ID starting with 8c5e737a04bbff8acf2afb3f753385c2f1ff04cec018213fb9b2dd1c2f8935be not found: ID does not exist" Oct 06 12:11:32 crc kubenswrapper[4958]: I1006 12:11:32.949773 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" path="/var/lib/kubelet/pods/4af9779c-4e49-4d5a-a09d-fd97e878ddcb/volumes" Oct 06 12:11:38 crc kubenswrapper[4958]: I1006 12:11:38.533310 4958 generic.go:334] "Generic (PLEG): container finished" podID="bff355e0-d99f-4997-81e9-849deb8cea2a" containerID="c3630b079b6e72fc7c91725b1a6c7675013987051e89903c7dab2f95cdd35649" exitCode=0 Oct 06 12:11:38 crc kubenswrapper[4958]: I1006 12:11:38.533392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" event={"ID":"bff355e0-d99f-4997-81e9-849deb8cea2a","Type":"ContainerDied","Data":"c3630b079b6e72fc7c91725b1a6c7675013987051e89903c7dab2f95cdd35649"} Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.016248 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.128795 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-inventory\") pod \"bff355e0-d99f-4997-81e9-849deb8cea2a\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.128847 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-bootstrap-combined-ca-bundle\") pod \"bff355e0-d99f-4997-81e9-849deb8cea2a\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.129062 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-ssh-key\") pod \"bff355e0-d99f-4997-81e9-849deb8cea2a\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.129122 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9m52\" (UniqueName: \"kubernetes.io/projected/bff355e0-d99f-4997-81e9-849deb8cea2a-kube-api-access-g9m52\") pod \"bff355e0-d99f-4997-81e9-849deb8cea2a\" (UID: \"bff355e0-d99f-4997-81e9-849deb8cea2a\") " Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.134817 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bff355e0-d99f-4997-81e9-849deb8cea2a" (UID: "bff355e0-d99f-4997-81e9-849deb8cea2a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.138759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff355e0-d99f-4997-81e9-849deb8cea2a-kube-api-access-g9m52" (OuterVolumeSpecName: "kube-api-access-g9m52") pod "bff355e0-d99f-4997-81e9-849deb8cea2a" (UID: "bff355e0-d99f-4997-81e9-849deb8cea2a"). InnerVolumeSpecName "kube-api-access-g9m52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.157507 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-inventory" (OuterVolumeSpecName: "inventory") pod "bff355e0-d99f-4997-81e9-849deb8cea2a" (UID: "bff355e0-d99f-4997-81e9-849deb8cea2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.174527 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bff355e0-d99f-4997-81e9-849deb8cea2a" (UID: "bff355e0-d99f-4997-81e9-849deb8cea2a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.232656 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.232725 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9m52\" (UniqueName: \"kubernetes.io/projected/bff355e0-d99f-4997-81e9-849deb8cea2a-kube-api-access-g9m52\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.232755 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.232780 4958 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff355e0-d99f-4997-81e9-849deb8cea2a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.562995 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" event={"ID":"bff355e0-d99f-4997-81e9-849deb8cea2a","Type":"ContainerDied","Data":"705c50520a380f703a728bdc5be6fcc36c71c528e3949f81afb919333bd5e84d"} Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.563230 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="705c50520a380f703a728bdc5be6fcc36c71c528e3949f81afb919333bd5e84d" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.563090 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.660519 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd"] Oct 06 12:11:40 crc kubenswrapper[4958]: E1006 12:11:40.660963 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff355e0-d99f-4997-81e9-849deb8cea2a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.660976 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff355e0-d99f-4997-81e9-849deb8cea2a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 12:11:40 crc kubenswrapper[4958]: E1006 12:11:40.661005 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="extract-content" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.661012 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="extract-content" Oct 06 12:11:40 crc kubenswrapper[4958]: E1006 12:11:40.661026 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="registry-server" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.661032 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="registry-server" Oct 06 12:11:40 crc kubenswrapper[4958]: E1006 12:11:40.661054 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="extract-utilities" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.661060 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="extract-utilities" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.661257 
4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af9779c-4e49-4d5a-a09d-fd97e878ddcb" containerName="registry-server" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.661278 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff355e0-d99f-4997-81e9-849deb8cea2a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.661839 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.666519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.666814 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.666979 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.667188 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.692931 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd"] Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.741041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpslq\" (UniqueName: \"kubernetes.io/projected/81afece6-fe0a-491c-94b8-3b19d00058c5-kube-api-access-mpslq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc 
kubenswrapper[4958]: I1006 12:11:40.741149 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.741411 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.842587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.842872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpslq\" (UniqueName: \"kubernetes.io/projected/81afece6-fe0a-491c-94b8-3b19d00058c5-kube-api-access-mpslq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.842999 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.847261 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.848629 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.860811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpslq\" (UniqueName: \"kubernetes.io/projected/81afece6-fe0a-491c-94b8-3b19d00058c5-kube-api-access-mpslq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:40 crc kubenswrapper[4958]: I1006 12:11:40.982659 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:11:41 crc kubenswrapper[4958]: I1006 12:11:41.526685 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd"] Oct 06 12:11:41 crc kubenswrapper[4958]: I1006 12:11:41.572892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" event={"ID":"81afece6-fe0a-491c-94b8-3b19d00058c5","Type":"ContainerStarted","Data":"7c2a480af02dc55bd17b19f3f65a8a5a6c4dac9a2b31a75a82217659047fd4d9"} Oct 06 12:11:42 crc kubenswrapper[4958]: I1006 12:11:42.908101 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mj6dj"] Oct 06 12:11:42 crc kubenswrapper[4958]: I1006 12:11:42.911772 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:42 crc kubenswrapper[4958]: I1006 12:11:42.938824 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mj6dj"] Oct 06 12:11:42 crc kubenswrapper[4958]: I1006 12:11:42.994873 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-catalog-content\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:42 crc kubenswrapper[4958]: I1006 12:11:42.994958 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nzq\" (UniqueName: \"kubernetes.io/projected/83d7decd-c310-40e5-b87e-8a1b75d2ffce-kube-api-access-79nzq\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " 
pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:42 crc kubenswrapper[4958]: I1006 12:11:42.995173 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-utilities\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.097012 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-catalog-content\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.097103 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nzq\" (UniqueName: \"kubernetes.io/projected/83d7decd-c310-40e5-b87e-8a1b75d2ffce-kube-api-access-79nzq\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.097230 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-utilities\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.098113 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-utilities\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " 
pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.098568 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-catalog-content\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.139600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nzq\" (UniqueName: \"kubernetes.io/projected/83d7decd-c310-40e5-b87e-8a1b75d2ffce-kube-api-access-79nzq\") pod \"certified-operators-mj6dj\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.233652 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.595438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" event={"ID":"81afece6-fe0a-491c-94b8-3b19d00058c5","Type":"ContainerStarted","Data":"a9a69170b511d8d41357ecaab3d125513a120366faf37482e8c3cf417db1a97f"} Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.610823 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" podStartSLOduration=2.531261741 podStartE2EDuration="3.610807314s" podCreationTimestamp="2025-10-06 12:11:40 +0000 UTC" firstStartedPulling="2025-10-06 12:11:41.537922102 +0000 UTC m=+1455.423947440" lastFinishedPulling="2025-10-06 12:11:42.617467705 +0000 UTC m=+1456.503493013" observedRunningTime="2025-10-06 12:11:43.610737972 +0000 UTC m=+1457.496763280" watchObservedRunningTime="2025-10-06 
12:11:43.610807314 +0000 UTC m=+1457.496832622" Oct 06 12:11:43 crc kubenswrapper[4958]: W1006 12:11:43.819236 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d7decd_c310_40e5_b87e_8a1b75d2ffce.slice/crio-22275e2a7f12ac951fe23fd1613d9c42376eb685d63466273d6442e39c9ae7e2 WatchSource:0}: Error finding container 22275e2a7f12ac951fe23fd1613d9c42376eb685d63466273d6442e39c9ae7e2: Status 404 returned error can't find the container with id 22275e2a7f12ac951fe23fd1613d9c42376eb685d63466273d6442e39c9ae7e2 Oct 06 12:11:43 crc kubenswrapper[4958]: I1006 12:11:43.823367 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mj6dj"] Oct 06 12:11:44 crc kubenswrapper[4958]: I1006 12:11:44.608342 4958 generic.go:334] "Generic (PLEG): container finished" podID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerID="632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947" exitCode=0 Oct 06 12:11:44 crc kubenswrapper[4958]: I1006 12:11:44.609717 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerDied","Data":"632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947"} Oct 06 12:11:44 crc kubenswrapper[4958]: I1006 12:11:44.609743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerStarted","Data":"22275e2a7f12ac951fe23fd1613d9c42376eb685d63466273d6442e39c9ae7e2"} Oct 06 12:11:45 crc kubenswrapper[4958]: I1006 12:11:45.621636 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerStarted","Data":"6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513"} Oct 06 
12:11:46 crc kubenswrapper[4958]: I1006 12:11:46.636970 4958 generic.go:334] "Generic (PLEG): container finished" podID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerID="6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513" exitCode=0 Oct 06 12:11:46 crc kubenswrapper[4958]: I1006 12:11:46.637111 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerDied","Data":"6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513"} Oct 06 12:11:47 crc kubenswrapper[4958]: I1006 12:11:47.655740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerStarted","Data":"b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98"} Oct 06 12:11:47 crc kubenswrapper[4958]: I1006 12:11:47.688879 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mj6dj" podStartSLOduration=3.258341362 podStartE2EDuration="5.688854313s" podCreationTimestamp="2025-10-06 12:11:42 +0000 UTC" firstStartedPulling="2025-10-06 12:11:44.610980242 +0000 UTC m=+1458.497005550" lastFinishedPulling="2025-10-06 12:11:47.041493163 +0000 UTC m=+1460.927518501" observedRunningTime="2025-10-06 12:11:47.678201342 +0000 UTC m=+1461.564226680" watchObservedRunningTime="2025-10-06 12:11:47.688854313 +0000 UTC m=+1461.574879631" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.234407 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.234889 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.315405 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.802207 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.802639 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.802717 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.803944 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"206f17391c9f0682bafb2e779c977ed48daa6668033056ec39d18f8882e93154"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.804063 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://206f17391c9f0682bafb2e779c977ed48daa6668033056ec39d18f8882e93154" gracePeriod=600 Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.805309 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:53 crc kubenswrapper[4958]: I1006 12:11:53.886427 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mj6dj"] Oct 06 12:11:54 crc kubenswrapper[4958]: I1006 12:11:54.733701 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="206f17391c9f0682bafb2e779c977ed48daa6668033056ec39d18f8882e93154" exitCode=0 Oct 06 12:11:54 crc kubenswrapper[4958]: I1006 12:11:54.733779 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"206f17391c9f0682bafb2e779c977ed48daa6668033056ec39d18f8882e93154"} Oct 06 12:11:54 crc kubenswrapper[4958]: I1006 12:11:54.734267 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"} Oct 06 12:11:54 crc kubenswrapper[4958]: I1006 12:11:54.734292 4958 scope.go:117] "RemoveContainer" containerID="58063f347fe5bcd8235160b2a3cc2b46057a0950357e7063907acb38c848571d" Oct 06 12:11:55 crc kubenswrapper[4958]: I1006 12:11:55.751617 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mj6dj" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="registry-server" containerID="cri-o://b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98" gracePeriod=2 Oct 06 12:11:55 crc kubenswrapper[4958]: I1006 12:11:55.989437 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mk78f"] Oct 06 12:11:55 crc kubenswrapper[4958]: I1006 12:11:55.993511 4958 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.031354 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk78f"] Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.181948 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-catalog-content\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.182094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-utilities\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.182251 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls5bl\" (UniqueName: \"kubernetes.io/projected/156e92a9-90ff-464a-aaca-a46a8ffc24ff-kube-api-access-ls5bl\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.283702 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-catalog-content\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.283766 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-utilities\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.283886 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls5bl\" (UniqueName: \"kubernetes.io/projected/156e92a9-90ff-464a-aaca-a46a8ffc24ff-kube-api-access-ls5bl\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.284217 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-catalog-content\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.284350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-utilities\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.302677 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls5bl\" (UniqueName: \"kubernetes.io/projected/156e92a9-90ff-464a-aaca-a46a8ffc24ff-kube-api-access-ls5bl\") pod \"redhat-marketplace-mk78f\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.317721 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.423834 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.588687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-utilities\") pod \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.588867 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79nzq\" (UniqueName: \"kubernetes.io/projected/83d7decd-c310-40e5-b87e-8a1b75d2ffce-kube-api-access-79nzq\") pod \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.588955 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-catalog-content\") pod \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\" (UID: \"83d7decd-c310-40e5-b87e-8a1b75d2ffce\") " Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.590189 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-utilities" (OuterVolumeSpecName: "utilities") pod "83d7decd-c310-40e5-b87e-8a1b75d2ffce" (UID: "83d7decd-c310-40e5-b87e-8a1b75d2ffce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.595325 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d7decd-c310-40e5-b87e-8a1b75d2ffce-kube-api-access-79nzq" (OuterVolumeSpecName: "kube-api-access-79nzq") pod "83d7decd-c310-40e5-b87e-8a1b75d2ffce" (UID: "83d7decd-c310-40e5-b87e-8a1b75d2ffce"). InnerVolumeSpecName "kube-api-access-79nzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.631384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d7decd-c310-40e5-b87e-8a1b75d2ffce" (UID: "83d7decd-c310-40e5-b87e-8a1b75d2ffce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.691619 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.691658 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79nzq\" (UniqueName: \"kubernetes.io/projected/83d7decd-c310-40e5-b87e-8a1b75d2ffce-kube-api-access-79nzq\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.691671 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d7decd-c310-40e5-b87e-8a1b75d2ffce-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.768125 4958 generic.go:334] "Generic (PLEG): container finished" podID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" 
containerID="b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98" exitCode=0 Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.768204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerDied","Data":"b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98"} Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.768262 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mj6dj" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.768300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mj6dj" event={"ID":"83d7decd-c310-40e5-b87e-8a1b75d2ffce","Type":"ContainerDied","Data":"22275e2a7f12ac951fe23fd1613d9c42376eb685d63466273d6442e39c9ae7e2"} Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.768328 4958 scope.go:117] "RemoveContainer" containerID="b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.772312 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk78f"] Oct 06 12:11:56 crc kubenswrapper[4958]: W1006 12:11:56.784872 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod156e92a9_90ff_464a_aaca_a46a8ffc24ff.slice/crio-a1c56b81b9122905cc41a6132753851241c9137a5a326dcccb68b3414d9b3ded WatchSource:0}: Error finding container a1c56b81b9122905cc41a6132753851241c9137a5a326dcccb68b3414d9b3ded: Status 404 returned error can't find the container with id a1c56b81b9122905cc41a6132753851241c9137a5a326dcccb68b3414d9b3ded Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.814166 4958 scope.go:117] "RemoveContainer" containerID="6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513" 
Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.826974 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mj6dj"] Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.842454 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mj6dj"] Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.849416 4958 scope.go:117] "RemoveContainer" containerID="632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.882616 4958 scope.go:117] "RemoveContainer" containerID="b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98" Oct 06 12:11:56 crc kubenswrapper[4958]: E1006 12:11:56.883238 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98\": container with ID starting with b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98 not found: ID does not exist" containerID="b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.883271 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98"} err="failed to get container status \"b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98\": rpc error: code = NotFound desc = could not find container \"b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98\": container with ID starting with b954dc941b8a8393103b929415ad0496581e40a427890ab28956e1243fa82b98 not found: ID does not exist" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.883293 4958 scope.go:117] "RemoveContainer" containerID="6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513" Oct 06 12:11:56 crc kubenswrapper[4958]: E1006 
12:11:56.883692 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513\": container with ID starting with 6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513 not found: ID does not exist" containerID="6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.883720 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513"} err="failed to get container status \"6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513\": rpc error: code = NotFound desc = could not find container \"6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513\": container with ID starting with 6a7c68d6e6d31b0b721a1868c79802c5e9fd4b56cff83bac8107af45ba9f6513 not found: ID does not exist" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.883742 4958 scope.go:117] "RemoveContainer" containerID="632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947" Oct 06 12:11:56 crc kubenswrapper[4958]: E1006 12:11:56.884042 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947\": container with ID starting with 632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947 not found: ID does not exist" containerID="632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.884110 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947"} err="failed to get container status \"632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947\": rpc 
error: code = NotFound desc = could not find container \"632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947\": container with ID starting with 632ca81c959c3318fc0a3794c85643d8158f1bec780b9dadd3eac094aa965947 not found: ID does not exist" Oct 06 12:11:56 crc kubenswrapper[4958]: I1006 12:11:56.961142 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" path="/var/lib/kubelet/pods/83d7decd-c310-40e5-b87e-8a1b75d2ffce/volumes" Oct 06 12:11:57 crc kubenswrapper[4958]: I1006 12:11:57.779474 4958 generic.go:334] "Generic (PLEG): container finished" podID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerID="0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d" exitCode=0 Oct 06 12:11:57 crc kubenswrapper[4958]: I1006 12:11:57.779566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk78f" event={"ID":"156e92a9-90ff-464a-aaca-a46a8ffc24ff","Type":"ContainerDied","Data":"0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d"} Oct 06 12:11:57 crc kubenswrapper[4958]: I1006 12:11:57.779819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk78f" event={"ID":"156e92a9-90ff-464a-aaca-a46a8ffc24ff","Type":"ContainerStarted","Data":"a1c56b81b9122905cc41a6132753851241c9137a5a326dcccb68b3414d9b3ded"} Oct 06 12:11:58 crc kubenswrapper[4958]: I1006 12:11:58.792684 4958 generic.go:334] "Generic (PLEG): container finished" podID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerID="984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa" exitCode=0 Oct 06 12:11:58 crc kubenswrapper[4958]: I1006 12:11:58.792732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk78f" event={"ID":"156e92a9-90ff-464a-aaca-a46a8ffc24ff","Type":"ContainerDied","Data":"984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa"} Oct 06 12:11:59 crc 
kubenswrapper[4958]: I1006 12:11:59.803025 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk78f" event={"ID":"156e92a9-90ff-464a-aaca-a46a8ffc24ff","Type":"ContainerStarted","Data":"3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09"} Oct 06 12:11:59 crc kubenswrapper[4958]: I1006 12:11:59.825772 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mk78f" podStartSLOduration=3.125820126 podStartE2EDuration="4.825753669s" podCreationTimestamp="2025-10-06 12:11:55 +0000 UTC" firstStartedPulling="2025-10-06 12:11:57.782420753 +0000 UTC m=+1471.668446051" lastFinishedPulling="2025-10-06 12:11:59.482354286 +0000 UTC m=+1473.368379594" observedRunningTime="2025-10-06 12:11:59.822130293 +0000 UTC m=+1473.708155611" watchObservedRunningTime="2025-10-06 12:11:59.825753669 +0000 UTC m=+1473.711778977" Oct 06 12:12:06 crc kubenswrapper[4958]: I1006 12:12:06.318543 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:12:06 crc kubenswrapper[4958]: I1006 12:12:06.318987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:12:06 crc kubenswrapper[4958]: I1006 12:12:06.360958 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:12:06 crc kubenswrapper[4958]: I1006 12:12:06.939412 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:12:07 crc kubenswrapper[4958]: I1006 12:12:07.001680 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk78f"] Oct 06 12:12:08 crc kubenswrapper[4958]: I1006 12:12:08.936807 4958 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-marketplace-mk78f" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="registry-server" containerID="cri-o://3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09" gracePeriod=2 Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.447007 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.551728 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-utilities\") pod \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.551887 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls5bl\" (UniqueName: \"kubernetes.io/projected/156e92a9-90ff-464a-aaca-a46a8ffc24ff-kube-api-access-ls5bl\") pod \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.551925 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-catalog-content\") pod \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\" (UID: \"156e92a9-90ff-464a-aaca-a46a8ffc24ff\") " Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.552968 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-utilities" (OuterVolumeSpecName: "utilities") pod "156e92a9-90ff-464a-aaca-a46a8ffc24ff" (UID: "156e92a9-90ff-464a-aaca-a46a8ffc24ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.558140 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156e92a9-90ff-464a-aaca-a46a8ffc24ff-kube-api-access-ls5bl" (OuterVolumeSpecName: "kube-api-access-ls5bl") pod "156e92a9-90ff-464a-aaca-a46a8ffc24ff" (UID: "156e92a9-90ff-464a-aaca-a46a8ffc24ff"). InnerVolumeSpecName "kube-api-access-ls5bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.566686 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "156e92a9-90ff-464a-aaca-a46a8ffc24ff" (UID: "156e92a9-90ff-464a-aaca-a46a8ffc24ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.654729 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.654769 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls5bl\" (UniqueName: \"kubernetes.io/projected/156e92a9-90ff-464a-aaca-a46a8ffc24ff-kube-api-access-ls5bl\") on node \"crc\" DevicePath \"\"" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.654782 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156e92a9-90ff-464a-aaca-a46a8ffc24ff-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.949870 4958 generic.go:334] "Generic (PLEG): container finished" podID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" 
containerID="3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09" exitCode=0 Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.949991 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk78f" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.949994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk78f" event={"ID":"156e92a9-90ff-464a-aaca-a46a8ffc24ff","Type":"ContainerDied","Data":"3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09"} Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.950201 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk78f" event={"ID":"156e92a9-90ff-464a-aaca-a46a8ffc24ff","Type":"ContainerDied","Data":"a1c56b81b9122905cc41a6132753851241c9137a5a326dcccb68b3414d9b3ded"} Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.950244 4958 scope.go:117] "RemoveContainer" containerID="3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09" Oct 06 12:12:09 crc kubenswrapper[4958]: I1006 12:12:09.983540 4958 scope.go:117] "RemoveContainer" containerID="984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.020481 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk78f"] Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.020897 4958 scope.go:117] "RemoveContainer" containerID="0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.033451 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk78f"] Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.068737 4958 scope.go:117] "RemoveContainer" containerID="3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09" Oct 06 
12:12:10 crc kubenswrapper[4958]: E1006 12:12:10.074363 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09\": container with ID starting with 3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09 not found: ID does not exist" containerID="3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.074422 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09"} err="failed to get container status \"3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09\": rpc error: code = NotFound desc = could not find container \"3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09\": container with ID starting with 3d444960cb7ce196040bb7c9f91a91443e043bb7d01456ac7044751a8a308d09 not found: ID does not exist" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.074453 4958 scope.go:117] "RemoveContainer" containerID="984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa" Oct 06 12:12:10 crc kubenswrapper[4958]: E1006 12:12:10.074823 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa\": container with ID starting with 984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa not found: ID does not exist" containerID="984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.074914 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa"} err="failed to get container status 
\"984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa\": rpc error: code = NotFound desc = could not find container \"984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa\": container with ID starting with 984d29ac0efea8ed1050247de7fd231a7ca043b129ee8287c94ea7d84536b1fa not found: ID does not exist" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.074934 4958 scope.go:117] "RemoveContainer" containerID="0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d" Oct 06 12:12:10 crc kubenswrapper[4958]: E1006 12:12:10.075209 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d\": container with ID starting with 0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d not found: ID does not exist" containerID="0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.075239 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d"} err="failed to get container status \"0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d\": rpc error: code = NotFound desc = could not find container \"0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d\": container with ID starting with 0024c46a8ddbcab340b2ed803e561714d453ed7bd9c84b955e1dda91c6c4958d not found: ID does not exist" Oct 06 12:12:10 crc kubenswrapper[4958]: I1006 12:12:10.930278 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" path="/var/lib/kubelet/pods/156e92a9-90ff-464a-aaca-a46a8ffc24ff/volumes" Oct 06 12:12:28 crc kubenswrapper[4958]: I1006 12:12:28.836643 4958 scope.go:117] "RemoveContainer" containerID="a89a96b1b69b14141f600174903d13b90a573a98c92971fefbdb276f665050b4" Oct 06 
12:12:28 crc kubenswrapper[4958]: I1006 12:12:28.868797 4958 scope.go:117] "RemoveContainer" containerID="edf653c280a78f421d48acef18d37cfb503a817ceee78823a587dfa97a2a76b8" Oct 06 12:12:28 crc kubenswrapper[4958]: I1006 12:12:28.904107 4958 scope.go:117] "RemoveContainer" containerID="30874e887853948e61c97e70c09d72a7ccf0b51f74d5603c645e9d7670def6be" Oct 06 12:12:28 crc kubenswrapper[4958]: I1006 12:12:28.971512 4958 scope.go:117] "RemoveContainer" containerID="e78290cb4959e8057ec06906db8acb4e63f48e640c93a51ac03fd0ab1c5a4644" Oct 06 12:12:29 crc kubenswrapper[4958]: I1006 12:12:29.025746 4958 scope.go:117] "RemoveContainer" containerID="0f6c302137de9d82ddf5bd0f335f230fa69764352d0459545e4aeeecb5c7622d" Oct 06 12:12:29 crc kubenswrapper[4958]: I1006 12:12:29.045717 4958 scope.go:117] "RemoveContainer" containerID="71988c94015d22c0b2dbbc077888c692b3d96c6629f3713ac3772bcdd51e0241" Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.076228 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wqr9j"] Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.094291 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-r2wq4"] Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.103111 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wqr9j"] Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.115763 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kcdjr"] Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.128122 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kcdjr"] Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.137385 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-r2wq4"] Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.934555 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="07512fa7-630e-4f59-b11f-ef1d0f014d88" path="/var/lib/kubelet/pods/07512fa7-630e-4f59-b11f-ef1d0f014d88/volumes" Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.935794 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f21ab19-342f-4a65-9ebc-37c9e83c1099" path="/var/lib/kubelet/pods/0f21ab19-342f-4a65-9ebc-37c9e83c1099/volumes" Oct 06 12:12:44 crc kubenswrapper[4958]: I1006 12:12:44.936950 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c828918b-7cf5-4ac3-8edb-a28834ce4249" path="/var/lib/kubelet/pods/c828918b-7cf5-4ac3-8edb-a28834ce4249/volumes" Oct 06 12:12:53 crc kubenswrapper[4958]: I1006 12:12:53.062906 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ad5d-account-create-bwdzc"] Oct 06 12:12:53 crc kubenswrapper[4958]: I1006 12:12:53.075979 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8f99-account-create-xhcbg"] Oct 06 12:12:53 crc kubenswrapper[4958]: I1006 12:12:53.083399 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ad5d-account-create-bwdzc"] Oct 06 12:12:53 crc kubenswrapper[4958]: I1006 12:12:53.089723 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8f99-account-create-xhcbg"] Oct 06 12:12:54 crc kubenswrapper[4958]: I1006 12:12:54.927926 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9b74b3-2333-45a2-9f90-c015f543989d" path="/var/lib/kubelet/pods/8e9b74b3-2333-45a2-9f90-c015f543989d/volumes" Oct 06 12:12:54 crc kubenswrapper[4958]: I1006 12:12:54.929129 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f203980b-58c6-4be4-8b4c-7f3e67b7de9c" path="/var/lib/kubelet/pods/f203980b-58c6-4be4-8b4c-7f3e67b7de9c/volumes" Oct 06 12:13:12 crc kubenswrapper[4958]: I1006 12:13:12.043296 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9c95-account-create-drzjl"] Oct 06 12:13:12 crc 
kubenswrapper[4958]: I1006 12:13:12.056172 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9c95-account-create-drzjl"] Oct 06 12:13:12 crc kubenswrapper[4958]: I1006 12:13:12.922905 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b899478f-34ca-4002-837c-7e795f4dc77a" path="/var/lib/kubelet/pods/b899478f-34ca-4002-837c-7e795f4dc77a/volumes" Oct 06 12:13:14 crc kubenswrapper[4958]: I1006 12:13:14.028051 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4wj4v"] Oct 06 12:13:14 crc kubenswrapper[4958]: I1006 12:13:14.034957 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4wj4v"] Oct 06 12:13:14 crc kubenswrapper[4958]: I1006 12:13:14.924350 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131f77e7-dc40-48c1-87c5-1a59b9ff226e" path="/var/lib/kubelet/pods/131f77e7-dc40-48c1-87c5-1a59b9ff226e/volumes" Oct 06 12:13:15 crc kubenswrapper[4958]: I1006 12:13:15.042299 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pvr9g"] Oct 06 12:13:15 crc kubenswrapper[4958]: I1006 12:13:15.058087 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-pvr9g"] Oct 06 12:13:15 crc kubenswrapper[4958]: I1006 12:13:15.070942 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vtjz2"] Oct 06 12:13:15 crc kubenswrapper[4958]: I1006 12:13:15.084697 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vtjz2"] Oct 06 12:13:16 crc kubenswrapper[4958]: I1006 12:13:16.932341 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5193f80f-d1c5-44bc-ad58-5c3b1a9f3977" path="/var/lib/kubelet/pods/5193f80f-d1c5-44bc-ad58-5c3b1a9f3977/volumes" Oct 06 12:13:16 crc kubenswrapper[4958]: I1006 12:13:16.935232 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ecceb49a-e566-4bbb-8f62-c4d0e59bd18d" path="/var/lib/kubelet/pods/ecceb49a-e566-4bbb-8f62-c4d0e59bd18d/volumes" Oct 06 12:13:21 crc kubenswrapper[4958]: I1006 12:13:21.789092 4958 generic.go:334] "Generic (PLEG): container finished" podID="81afece6-fe0a-491c-94b8-3b19d00058c5" containerID="a9a69170b511d8d41357ecaab3d125513a120366faf37482e8c3cf417db1a97f" exitCode=0 Oct 06 12:13:21 crc kubenswrapper[4958]: I1006 12:13:21.789227 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" event={"ID":"81afece6-fe0a-491c-94b8-3b19d00058c5","Type":"ContainerDied","Data":"a9a69170b511d8d41357ecaab3d125513a120366faf37482e8c3cf417db1a97f"} Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.258027 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.398615 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpslq\" (UniqueName: \"kubernetes.io/projected/81afece6-fe0a-491c-94b8-3b19d00058c5-kube-api-access-mpslq\") pod \"81afece6-fe0a-491c-94b8-3b19d00058c5\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.399290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-ssh-key\") pod \"81afece6-fe0a-491c-94b8-3b19d00058c5\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.399327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-inventory\") pod \"81afece6-fe0a-491c-94b8-3b19d00058c5\" (UID: \"81afece6-fe0a-491c-94b8-3b19d00058c5\") " Oct 06 12:13:23 crc 
kubenswrapper[4958]: I1006 12:13:23.404503 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81afece6-fe0a-491c-94b8-3b19d00058c5-kube-api-access-mpslq" (OuterVolumeSpecName: "kube-api-access-mpslq") pod "81afece6-fe0a-491c-94b8-3b19d00058c5" (UID: "81afece6-fe0a-491c-94b8-3b19d00058c5"). InnerVolumeSpecName "kube-api-access-mpslq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.429875 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-inventory" (OuterVolumeSpecName: "inventory") pod "81afece6-fe0a-491c-94b8-3b19d00058c5" (UID: "81afece6-fe0a-491c-94b8-3b19d00058c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.431510 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81afece6-fe0a-491c-94b8-3b19d00058c5" (UID: "81afece6-fe0a-491c-94b8-3b19d00058c5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.501065 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.501111 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81afece6-fe0a-491c-94b8-3b19d00058c5-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.501121 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpslq\" (UniqueName: \"kubernetes.io/projected/81afece6-fe0a-491c-94b8-3b19d00058c5-kube-api-access-mpslq\") on node \"crc\" DevicePath \"\"" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.806536 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" event={"ID":"81afece6-fe0a-491c-94b8-3b19d00058c5","Type":"ContainerDied","Data":"7c2a480af02dc55bd17b19f3f65a8a5a6c4dac9a2b31a75a82217659047fd4d9"} Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.806609 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c2a480af02dc55bd17b19f3f65a8a5a6c4dac9a2b31a75a82217659047fd4d9" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.806573 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.904985 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7"] Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.905863 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="extract-utilities" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.905928 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="extract-utilities" Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.906018 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="registry-server" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.906074 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="registry-server" Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.906159 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="extract-content" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.906211 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="extract-content" Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.906266 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="registry-server" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.906324 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="registry-server" Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.906381 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="81afece6-fe0a-491c-94b8-3b19d00058c5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.906435 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="81afece6-fe0a-491c-94b8-3b19d00058c5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.906495 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="extract-utilities" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.906545 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="extract-utilities" Oct 06 12:13:23 crc kubenswrapper[4958]: E1006 12:13:23.906617 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="extract-content" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.906683 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="extract-content" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.907201 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d7decd-c310-40e5-b87e-8a1b75d2ffce" containerName="registry-server" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.907281 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="156e92a9-90ff-464a-aaca-a46a8ffc24ff" containerName="registry-server" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.907355 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="81afece6-fe0a-491c-94b8-3b19d00058c5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.908328 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.921430 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.921608 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.921782 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.921909 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:13:23 crc kubenswrapper[4958]: I1006 12:13:23.945199 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7"] Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.017834 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzz4k\" (UniqueName: \"kubernetes.io/projected/55d6c75b-9ef4-4576-bdc9-46bd62865410-kube-api-access-tzz4k\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.017927 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: 
I1006 12:13:24.017992 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.120155 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.120239 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.120394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzz4k\" (UniqueName: \"kubernetes.io/projected/55d6c75b-9ef4-4576-bdc9-46bd62865410-kube-api-access-tzz4k\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.125902 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.126474 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.144861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzz4k\" (UniqueName: \"kubernetes.io/projected/55d6c75b-9ef4-4576-bdc9-46bd62865410-kube-api-access-tzz4k\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7whq7\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.241339 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.825491 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7"] Oct 06 12:13:24 crc kubenswrapper[4958]: I1006 12:13:24.830851 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:13:25 crc kubenswrapper[4958]: I1006 12:13:25.044232 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rs8jp"] Oct 06 12:13:25 crc kubenswrapper[4958]: I1006 12:13:25.054826 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rs8jp"] Oct 06 12:13:25 crc kubenswrapper[4958]: I1006 12:13:25.827030 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" event={"ID":"55d6c75b-9ef4-4576-bdc9-46bd62865410","Type":"ContainerStarted","Data":"8268cc8b502e5958c1e1bac9b37a526920f55dd82fe47833a0ea995988a15ef5"} Oct 06 12:13:26 crc kubenswrapper[4958]: I1006 12:13:26.836178 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" event={"ID":"55d6c75b-9ef4-4576-bdc9-46bd62865410","Type":"ContainerStarted","Data":"2fdf38403478aa77725b993a9a8b7de1c0f0166fd4b487babe4a77c4b6a6378b"} Oct 06 12:13:26 crc kubenswrapper[4958]: I1006 12:13:26.855916 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" podStartSLOduration=2.756515882 podStartE2EDuration="3.855896437s" podCreationTimestamp="2025-10-06 12:13:23 +0000 UTC" firstStartedPulling="2025-10-06 12:13:24.830645706 +0000 UTC m=+1558.716671014" lastFinishedPulling="2025-10-06 12:13:25.930026211 +0000 UTC m=+1559.816051569" observedRunningTime="2025-10-06 
12:13:26.853826698 +0000 UTC m=+1560.739852076" watchObservedRunningTime="2025-10-06 12:13:26.855896437 +0000 UTC m=+1560.741921786" Oct 06 12:13:26 crc kubenswrapper[4958]: I1006 12:13:26.932614 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe42ef2-678c-4c5d-af64-8cce7c3eb55b" path="/var/lib/kubelet/pods/dfe42ef2-678c-4c5d-af64-8cce7c3eb55b/volumes" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.184947 4958 scope.go:117] "RemoveContainer" containerID="2dc678b984de79d7288a020c06b74c7886ffbef49af3e052f28644726affd6bf" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.221060 4958 scope.go:117] "RemoveContainer" containerID="89026b4879c20b132c0de374e0bcb9900df5c58f7291f97f1d0d09e516d209ec" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.259799 4958 scope.go:117] "RemoveContainer" containerID="13a53203f35bee5adaaa48ed11d86bee6ef684b94bdd7507a19bb89089a904b5" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.303557 4958 scope.go:117] "RemoveContainer" containerID="21f68007797b01fab017c4476f71a86655b274243b82ee6d9974b37f51cae81b" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.344982 4958 scope.go:117] "RemoveContainer" containerID="8dabafb2cf0213490fa5b5bea5dbc8c60a6d07907857f8ae7b51cd92f834644c" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.385555 4958 scope.go:117] "RemoveContainer" containerID="7d28a66531b28268da699c6de6f051713fc683d75174a9fa0bc34d70271ee2fa" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.432570 4958 scope.go:117] "RemoveContainer" containerID="b3e58623d5f51a6601bc53362f56c55b0979b0e6b819c574d4828c6e866c78ed" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.459397 4958 scope.go:117] "RemoveContainer" containerID="fb52276eccb8dd8524101c011d1b2c9d89ecb3e4454b9b7c163f658f2020be99" Oct 06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.482165 4958 scope.go:117] "RemoveContainer" containerID="94995a963fdafe4a2ba4bef81b89b364bcfc0f8134d6805905d95f7cf5bb7c16" Oct 
06 12:13:29 crc kubenswrapper[4958]: I1006 12:13:29.507327 4958 scope.go:117] "RemoveContainer" containerID="2359cce51ae6486b852d0fabec225f8159ba0a26aa51b3ac1aebcb8cdf74d4e5" Oct 06 12:13:33 crc kubenswrapper[4958]: I1006 12:13:33.066200 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-408a-account-create-vwbr4"] Oct 06 12:13:33 crc kubenswrapper[4958]: I1006 12:13:33.077062 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-408a-account-create-vwbr4"] Oct 06 12:13:34 crc kubenswrapper[4958]: I1006 12:13:34.933180 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf2444f-b495-4fa7-822d-053a8d5c9b5d" path="/var/lib/kubelet/pods/7bf2444f-b495-4fa7-822d-053a8d5c9b5d/volumes" Oct 06 12:13:39 crc kubenswrapper[4958]: I1006 12:13:39.051997 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0515-account-create-txz72"] Oct 06 12:13:39 crc kubenswrapper[4958]: I1006 12:13:39.068439 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0515-account-create-txz72"] Oct 06 12:13:40 crc kubenswrapper[4958]: I1006 12:13:40.934506 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8e6024-5fe5-4759-ab73-414ebc3388f9" path="/var/lib/kubelet/pods/0a8e6024-5fe5-4759-ab73-414ebc3388f9/volumes" Oct 06 12:13:41 crc kubenswrapper[4958]: I1006 12:13:41.039247 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0757-account-create-7jfwp"] Oct 06 12:13:41 crc kubenswrapper[4958]: I1006 12:13:41.051831 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0757-account-create-7jfwp"] Oct 06 12:13:42 crc kubenswrapper[4958]: I1006 12:13:42.045448 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jxgv9"] Oct 06 12:13:42 crc kubenswrapper[4958]: I1006 12:13:42.054173 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-sync-jxgv9"] Oct 06 12:13:42 crc kubenswrapper[4958]: I1006 12:13:42.925990 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16cb82f4-081c-4085-b916-7ac6b4366c0a" path="/var/lib/kubelet/pods/16cb82f4-081c-4085-b916-7ac6b4366c0a/volumes" Oct 06 12:13:42 crc kubenswrapper[4958]: I1006 12:13:42.926776 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d53199-b7b6-4d78-9b4a-53ec81b1041d" path="/var/lib/kubelet/pods/e3d53199-b7b6-4d78-9b4a-53ec81b1041d/volumes" Oct 06 12:13:56 crc kubenswrapper[4958]: I1006 12:13:56.086079 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vqtmw"] Oct 06 12:13:56 crc kubenswrapper[4958]: I1006 12:13:56.096112 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vqtmw"] Oct 06 12:13:56 crc kubenswrapper[4958]: I1006 12:13:56.929509 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7df091e-b3a3-441a-a831-d57964a84438" path="/var/lib/kubelet/pods/d7df091e-b3a3-441a-a831-d57964a84438/volumes" Oct 06 12:14:08 crc kubenswrapper[4958]: I1006 12:14:08.035503 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dv44v"] Oct 06 12:14:08 crc kubenswrapper[4958]: I1006 12:14:08.045293 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dv44v"] Oct 06 12:14:08 crc kubenswrapper[4958]: I1006 12:14:08.944704 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0818578-1e79-4dfb-8257-b8b1a2bc0cef" path="/var/lib/kubelet/pods/d0818578-1e79-4dfb-8257-b8b1a2bc0cef/volumes" Oct 06 12:14:15 crc kubenswrapper[4958]: I1006 12:14:15.040404 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s7xhl"] Oct 06 12:14:15 crc kubenswrapper[4958]: I1006 12:14:15.052793 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s7xhl"] Oct 06 
12:14:16 crc kubenswrapper[4958]: I1006 12:14:16.931444 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7" path="/var/lib/kubelet/pods/e2efb2d0-1b2f-47fd-a1ee-5e69cafa0fd7/volumes" Oct 06 12:14:20 crc kubenswrapper[4958]: I1006 12:14:20.044097 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9kx8n"] Oct 06 12:14:20 crc kubenswrapper[4958]: I1006 12:14:20.052596 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9kx8n"] Oct 06 12:14:20 crc kubenswrapper[4958]: I1006 12:14:20.925135 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d86d752-3a4d-4940-b221-242b3253c418" path="/var/lib/kubelet/pods/2d86d752-3a4d-4940-b221-242b3253c418/volumes" Oct 06 12:14:23 crc kubenswrapper[4958]: I1006 12:14:23.801887 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:14:23 crc kubenswrapper[4958]: I1006 12:14:23.802254 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:14:29 crc kubenswrapper[4958]: I1006 12:14:29.789565 4958 scope.go:117] "RemoveContainer" containerID="ba28d7229c7cd7e99f7c0055e526ec03a8f7ff0840038c6b1530766f70185199" Oct 06 12:14:29 crc kubenswrapper[4958]: I1006 12:14:29.864966 4958 scope.go:117] "RemoveContainer" containerID="e85b20492bebd2fb52a6adfb25d6edcf5303303b44221c9d93db5b05ea4800f3" Oct 06 12:14:29 crc kubenswrapper[4958]: I1006 12:14:29.904462 4958 scope.go:117] 
"RemoveContainer" containerID="649ac2b31dec1f4a4111458193e96b6acbe4f8ffaf2b76c0e7b7bcebf89ba7b0" Oct 06 12:14:29 crc kubenswrapper[4958]: I1006 12:14:29.940897 4958 scope.go:117] "RemoveContainer" containerID="4d3a216076817b0747696b3a72343cd223743ed31aec65783864ca48a94b246a" Oct 06 12:14:29 crc kubenswrapper[4958]: I1006 12:14:29.985719 4958 scope.go:117] "RemoveContainer" containerID="be6cfd84d054a5d7f5ab91a026a82bbc0ded37e649b2de2a77f0d720bc80b0cf" Oct 06 12:14:30 crc kubenswrapper[4958]: I1006 12:14:30.055793 4958 scope.go:117] "RemoveContainer" containerID="bda9bd9e98ae5a4cd88593ea36a8f7ca2e61dd496c0d5a786ae8a7b15506bfcc" Oct 06 12:14:30 crc kubenswrapper[4958]: I1006 12:14:30.087500 4958 scope.go:117] "RemoveContainer" containerID="caa5ca7ecebae672272f61cf77f436f3c18a6481059473294df40d3abfa17a2d" Oct 06 12:14:30 crc kubenswrapper[4958]: I1006 12:14:30.120551 4958 scope.go:117] "RemoveContainer" containerID="aa4c6efd4b9366291e2ea0bbf13dc0b150b5630f61004579382ea0e57ad415b3" Oct 06 12:14:31 crc kubenswrapper[4958]: I1006 12:14:31.038620 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wqlhj"] Oct 06 12:14:31 crc kubenswrapper[4958]: I1006 12:14:31.047175 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wqlhj"] Oct 06 12:14:32 crc kubenswrapper[4958]: I1006 12:14:32.930559 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e240eda-9921-45e1-991d-971031189ee4" path="/var/lib/kubelet/pods/9e240eda-9921-45e1-991d-971031189ee4/volumes" Oct 06 12:14:44 crc kubenswrapper[4958]: I1006 12:14:44.726420 4958 generic.go:334] "Generic (PLEG): container finished" podID="55d6c75b-9ef4-4576-bdc9-46bd62865410" containerID="2fdf38403478aa77725b993a9a8b7de1c0f0166fd4b487babe4a77c4b6a6378b" exitCode=0 Oct 06 12:14:44 crc kubenswrapper[4958]: I1006 12:14:44.726518 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" event={"ID":"55d6c75b-9ef4-4576-bdc9-46bd62865410","Type":"ContainerDied","Data":"2fdf38403478aa77725b993a9a8b7de1c0f0166fd4b487babe4a77c4b6a6378b"} Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.213467 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.313337 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzz4k\" (UniqueName: \"kubernetes.io/projected/55d6c75b-9ef4-4576-bdc9-46bd62865410-kube-api-access-tzz4k\") pod \"55d6c75b-9ef4-4576-bdc9-46bd62865410\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.313409 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-inventory\") pod \"55d6c75b-9ef4-4576-bdc9-46bd62865410\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.313521 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-ssh-key\") pod \"55d6c75b-9ef4-4576-bdc9-46bd62865410\" (UID: \"55d6c75b-9ef4-4576-bdc9-46bd62865410\") " Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.320396 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d6c75b-9ef4-4576-bdc9-46bd62865410-kube-api-access-tzz4k" (OuterVolumeSpecName: "kube-api-access-tzz4k") pod "55d6c75b-9ef4-4576-bdc9-46bd62865410" (UID: "55d6c75b-9ef4-4576-bdc9-46bd62865410"). InnerVolumeSpecName "kube-api-access-tzz4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.344975 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-inventory" (OuterVolumeSpecName: "inventory") pod "55d6c75b-9ef4-4576-bdc9-46bd62865410" (UID: "55d6c75b-9ef4-4576-bdc9-46bd62865410"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.364465 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55d6c75b-9ef4-4576-bdc9-46bd62865410" (UID: "55d6c75b-9ef4-4576-bdc9-46bd62865410"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.415866 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.415915 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzz4k\" (UniqueName: \"kubernetes.io/projected/55d6c75b-9ef4-4576-bdc9-46bd62865410-kube-api-access-tzz4k\") on node \"crc\" DevicePath \"\"" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.415929 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55d6c75b-9ef4-4576-bdc9-46bd62865410-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.749125 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" 
event={"ID":"55d6c75b-9ef4-4576-bdc9-46bd62865410","Type":"ContainerDied","Data":"8268cc8b502e5958c1e1bac9b37a526920f55dd82fe47833a0ea995988a15ef5"} Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.749203 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8268cc8b502e5958c1e1bac9b37a526920f55dd82fe47833a0ea995988a15ef5" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.749229 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7whq7" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.860089 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5"] Oct 06 12:14:46 crc kubenswrapper[4958]: E1006 12:14:46.860491 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d6c75b-9ef4-4576-bdc9-46bd62865410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.860510 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d6c75b-9ef4-4576-bdc9-46bd62865410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.860741 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d6c75b-9ef4-4576-bdc9-46bd62865410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.861383 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.865219 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.865477 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.865498 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.870119 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5"] Oct 06 12:14:46 crc kubenswrapper[4958]: I1006 12:14:46.873775 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.027462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhqc\" (UniqueName: \"kubernetes.io/projected/264707ac-53c7-4002-bb44-5ed2af779aec-kube-api-access-twhqc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.028860 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 
12:14:47.029578 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.131195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.131357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twhqc\" (UniqueName: \"kubernetes.io/projected/264707ac-53c7-4002-bb44-5ed2af779aec-kube-api-access-twhqc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.131459 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.136286 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.138616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.164111 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twhqc\" (UniqueName: \"kubernetes.io/projected/264707ac-53c7-4002-bb44-5ed2af779aec-kube-api-access-twhqc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.193468 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:47 crc kubenswrapper[4958]: I1006 12:14:47.831181 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5"] Oct 06 12:14:47 crc kubenswrapper[4958]: W1006 12:14:47.838327 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod264707ac_53c7_4002_bb44_5ed2af779aec.slice/crio-5ace7307f0be3a447493d1df665e4287cdb547c7bf1b03ff73bb49feac8383aa WatchSource:0}: Error finding container 5ace7307f0be3a447493d1df665e4287cdb547c7bf1b03ff73bb49feac8383aa: Status 404 returned error can't find the container with id 5ace7307f0be3a447493d1df665e4287cdb547c7bf1b03ff73bb49feac8383aa Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.035624 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pw2gh"] Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.039733 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.046541 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw2gh"] Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.154586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-catalog-content\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.155086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-utilities\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.155145 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2z74\" (UniqueName: \"kubernetes.io/projected/65745d6b-470b-482c-91fd-8c71acbb7e06-kube-api-access-k2z74\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.256368 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-catalog-content\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.256703 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-utilities\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.256786 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2z74\" (UniqueName: \"kubernetes.io/projected/65745d6b-470b-482c-91fd-8c71acbb7e06-kube-api-access-k2z74\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.257047 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-catalog-content\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.257225 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-utilities\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.281241 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2z74\" (UniqueName: \"kubernetes.io/projected/65745d6b-470b-482c-91fd-8c71acbb7e06-kube-api-access-k2z74\") pod \"community-operators-pw2gh\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.368256 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.671234 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw2gh"] Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.774814 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" event={"ID":"264707ac-53c7-4002-bb44-5ed2af779aec","Type":"ContainerStarted","Data":"5ace7307f0be3a447493d1df665e4287cdb547c7bf1b03ff73bb49feac8383aa"} Oct 06 12:14:48 crc kubenswrapper[4958]: I1006 12:14:48.776521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2gh" event={"ID":"65745d6b-470b-482c-91fd-8c71acbb7e06","Type":"ContainerStarted","Data":"e9a6b672aa957c893f1c84ce981b6494a938590bfb13ab689a8b98c09d7da79f"} Oct 06 12:14:49 crc kubenswrapper[4958]: I1006 12:14:49.787898 4958 generic.go:334] "Generic (PLEG): container finished" podID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerID="8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de" exitCode=0 Oct 06 12:14:49 crc kubenswrapper[4958]: I1006 12:14:49.787990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2gh" event={"ID":"65745d6b-470b-482c-91fd-8c71acbb7e06","Type":"ContainerDied","Data":"8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de"} Oct 06 12:14:49 crc kubenswrapper[4958]: I1006 12:14:49.789734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" event={"ID":"264707ac-53c7-4002-bb44-5ed2af779aec","Type":"ContainerStarted","Data":"c1ffc527466d4a0c9e08d48193fba600d3b8e464a8555ffd21536a19c3c62184"} Oct 06 12:14:49 crc kubenswrapper[4958]: I1006 12:14:49.833092 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" podStartSLOduration=2.560963769 podStartE2EDuration="3.833071408s" podCreationTimestamp="2025-10-06 12:14:46 +0000 UTC" firstStartedPulling="2025-10-06 12:14:47.841485353 +0000 UTC m=+1641.727510661" lastFinishedPulling="2025-10-06 12:14:49.113592952 +0000 UTC m=+1642.999618300" observedRunningTime="2025-10-06 12:14:49.826015624 +0000 UTC m=+1643.712040952" watchObservedRunningTime="2025-10-06 12:14:49.833071408 +0000 UTC m=+1643.719096726" Oct 06 12:14:51 crc kubenswrapper[4958]: I1006 12:14:51.820076 4958 generic.go:334] "Generic (PLEG): container finished" podID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerID="715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069" exitCode=0 Oct 06 12:14:51 crc kubenswrapper[4958]: I1006 12:14:51.820350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2gh" event={"ID":"65745d6b-470b-482c-91fd-8c71acbb7e06","Type":"ContainerDied","Data":"715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069"} Oct 06 12:14:53 crc kubenswrapper[4958]: I1006 12:14:53.802331 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:14:53 crc kubenswrapper[4958]: I1006 12:14:53.802729 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:14:53 crc kubenswrapper[4958]: I1006 12:14:53.845442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-pw2gh" event={"ID":"65745d6b-470b-482c-91fd-8c71acbb7e06","Type":"ContainerStarted","Data":"1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161"} Oct 06 12:14:53 crc kubenswrapper[4958]: I1006 12:14:53.871933 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pw2gh" podStartSLOduration=2.831717712 podStartE2EDuration="5.871914292s" podCreationTimestamp="2025-10-06 12:14:48 +0000 UTC" firstStartedPulling="2025-10-06 12:14:49.790737341 +0000 UTC m=+1643.676762649" lastFinishedPulling="2025-10-06 12:14:52.830933911 +0000 UTC m=+1646.716959229" observedRunningTime="2025-10-06 12:14:53.866876666 +0000 UTC m=+1647.752901994" watchObservedRunningTime="2025-10-06 12:14:53.871914292 +0000 UTC m=+1647.757939600" Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.045007 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-gzxhx"] Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.052749 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-nsqbc"] Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.061410 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6xbk9"] Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.069001 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-gzxhx"] Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.076758 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-nsqbc"] Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.083335 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6xbk9"] Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.858863 4958 generic.go:334] "Generic (PLEG): container finished" podID="264707ac-53c7-4002-bb44-5ed2af779aec" 
containerID="c1ffc527466d4a0c9e08d48193fba600d3b8e464a8555ffd21536a19c3c62184" exitCode=0 Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.858977 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" event={"ID":"264707ac-53c7-4002-bb44-5ed2af779aec","Type":"ContainerDied","Data":"c1ffc527466d4a0c9e08d48193fba600d3b8e464a8555ffd21536a19c3c62184"} Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.927052 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3" path="/var/lib/kubelet/pods/9ba0cdbc-1261-4dd4-9be1-f733bc94f7d3/volumes" Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.927839 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee717167-2b5b-4c0a-9528-b14f49856f5e" path="/var/lib/kubelet/pods/ee717167-2b5b-4c0a-9528-b14f49856f5e/volumes" Oct 06 12:14:54 crc kubenswrapper[4958]: I1006 12:14:54.928582 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8492838-08b1-4d13-8697-bf595621d465" path="/var/lib/kubelet/pods/f8492838-08b1-4d13-8697-bf595621d465/volumes" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.277359 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.453006 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-inventory\") pod \"264707ac-53c7-4002-bb44-5ed2af779aec\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.453115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twhqc\" (UniqueName: \"kubernetes.io/projected/264707ac-53c7-4002-bb44-5ed2af779aec-kube-api-access-twhqc\") pod \"264707ac-53c7-4002-bb44-5ed2af779aec\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.453146 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-ssh-key\") pod \"264707ac-53c7-4002-bb44-5ed2af779aec\" (UID: \"264707ac-53c7-4002-bb44-5ed2af779aec\") " Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.463430 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264707ac-53c7-4002-bb44-5ed2af779aec-kube-api-access-twhqc" (OuterVolumeSpecName: "kube-api-access-twhqc") pod "264707ac-53c7-4002-bb44-5ed2af779aec" (UID: "264707ac-53c7-4002-bb44-5ed2af779aec"). InnerVolumeSpecName "kube-api-access-twhqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.480572 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "264707ac-53c7-4002-bb44-5ed2af779aec" (UID: "264707ac-53c7-4002-bb44-5ed2af779aec"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.487341 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-inventory" (OuterVolumeSpecName: "inventory") pod "264707ac-53c7-4002-bb44-5ed2af779aec" (UID: "264707ac-53c7-4002-bb44-5ed2af779aec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.555811 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twhqc\" (UniqueName: \"kubernetes.io/projected/264707ac-53c7-4002-bb44-5ed2af779aec-kube-api-access-twhqc\") on node \"crc\" DevicePath \"\"" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.555853 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.555866 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/264707ac-53c7-4002-bb44-5ed2af779aec-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.886348 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" event={"ID":"264707ac-53c7-4002-bb44-5ed2af779aec","Type":"ContainerDied","Data":"5ace7307f0be3a447493d1df665e4287cdb547c7bf1b03ff73bb49feac8383aa"} Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.886425 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ace7307f0be3a447493d1df665e4287cdb547c7bf1b03ff73bb49feac8383aa" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.886521 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.971939 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m"] Oct 06 12:14:56 crc kubenswrapper[4958]: E1006 12:14:56.972710 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264707ac-53c7-4002-bb44-5ed2af779aec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.972731 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="264707ac-53c7-4002-bb44-5ed2af779aec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.972902 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="264707ac-53c7-4002-bb44-5ed2af779aec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.973716 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.976543 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.978275 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.978428 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.978677 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:14:56 crc kubenswrapper[4958]: I1006 12:14:56.988261 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m"] Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.169126 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.169273 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7cq\" (UniqueName: \"kubernetes.io/projected/27373b57-9835-4096-9b31-eab53444391c-kube-api-access-9m7cq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.170259 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.272097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.272260 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.272289 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7cq\" (UniqueName: \"kubernetes.io/projected/27373b57-9835-4096-9b31-eab53444391c-kube-api-access-9m7cq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.279041 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: 
\"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.280248 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.291481 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7cq\" (UniqueName: \"kubernetes.io/projected/27373b57-9835-4096-9b31-eab53444391c-kube-api-access-9m7cq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-m7n8m\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.295069 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:14:57 crc kubenswrapper[4958]: W1006 12:14:57.915486 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27373b57_9835_4096_9b31_eab53444391c.slice/crio-b9506691750364514d7c3c213fd90857b3df81826686efd1309b4167fc4aa03d WatchSource:0}: Error finding container b9506691750364514d7c3c213fd90857b3df81826686efd1309b4167fc4aa03d: Status 404 returned error can't find the container with id b9506691750364514d7c3c213fd90857b3df81826686efd1309b4167fc4aa03d Oct 06 12:14:57 crc kubenswrapper[4958]: I1006 12:14:57.915734 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m"] Oct 06 12:14:58 crc kubenswrapper[4958]: I1006 12:14:58.369479 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:58 crc kubenswrapper[4958]: I1006 12:14:58.369537 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:58 crc kubenswrapper[4958]: I1006 12:14:58.437263 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:58 crc kubenswrapper[4958]: I1006 12:14:58.908088 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" event={"ID":"27373b57-9835-4096-9b31-eab53444391c","Type":"ContainerStarted","Data":"b9506691750364514d7c3c213fd90857b3df81826686efd1309b4167fc4aa03d"} Oct 06 12:14:59 crc kubenswrapper[4958]: I1006 12:14:59.024721 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:14:59 crc kubenswrapper[4958]: I1006 12:14:59.087135 4958 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw2gh"] Oct 06 12:14:59 crc kubenswrapper[4958]: I1006 12:14:59.925800 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" event={"ID":"27373b57-9835-4096-9b31-eab53444391c","Type":"ContainerStarted","Data":"ea7945d0b60d068d8465a5c2e89970429aa3ac760248798222c8f887e7d2d87f"} Oct 06 12:14:59 crc kubenswrapper[4958]: I1006 12:14:59.968928 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" podStartSLOduration=3.100135208 podStartE2EDuration="3.96888869s" podCreationTimestamp="2025-10-06 12:14:56 +0000 UTC" firstStartedPulling="2025-10-06 12:14:57.919584682 +0000 UTC m=+1651.805609990" lastFinishedPulling="2025-10-06 12:14:58.788338164 +0000 UTC m=+1652.674363472" observedRunningTime="2025-10-06 12:14:59.950189848 +0000 UTC m=+1653.836215216" watchObservedRunningTime="2025-10-06 12:14:59.96888869 +0000 UTC m=+1653.854913998" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.142051 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2"] Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.145768 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.148902 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.148987 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.157483 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2"] Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.235674 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-secret-volume\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.235903 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2nz9\" (UniqueName: \"kubernetes.io/projected/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-kube-api-access-w2nz9\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.236580 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-config-volume\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.338949 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-config-volume\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.339131 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-secret-volume\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.339389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2nz9\" (UniqueName: \"kubernetes.io/projected/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-kube-api-access-w2nz9\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.340021 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-config-volume\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.349530 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-secret-volume\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.362601 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2nz9\" (UniqueName: \"kubernetes.io/projected/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-kube-api-access-w2nz9\") pod \"collect-profiles-29329215-wbdm2\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.475548 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.910909 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2"] Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.936489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" event={"ID":"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7","Type":"ContainerStarted","Data":"415a0c429fb43d1c4dd757e3da5c812e473b066bf53bfd4ceb85af25e9a504ff"} Oct 06 12:15:00 crc kubenswrapper[4958]: I1006 12:15:00.936816 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pw2gh" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="registry-server" containerID="cri-o://1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161" gracePeriod=2 Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.323387 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.369867 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-catalog-content\") pod \"65745d6b-470b-482c-91fd-8c71acbb7e06\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.369949 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2z74\" (UniqueName: \"kubernetes.io/projected/65745d6b-470b-482c-91fd-8c71acbb7e06-kube-api-access-k2z74\") pod \"65745d6b-470b-482c-91fd-8c71acbb7e06\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.370274 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-utilities\") pod \"65745d6b-470b-482c-91fd-8c71acbb7e06\" (UID: \"65745d6b-470b-482c-91fd-8c71acbb7e06\") " Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.371805 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-utilities" (OuterVolumeSpecName: "utilities") pod "65745d6b-470b-482c-91fd-8c71acbb7e06" (UID: "65745d6b-470b-482c-91fd-8c71acbb7e06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.376848 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65745d6b-470b-482c-91fd-8c71acbb7e06-kube-api-access-k2z74" (OuterVolumeSpecName: "kube-api-access-k2z74") pod "65745d6b-470b-482c-91fd-8c71acbb7e06" (UID: "65745d6b-470b-482c-91fd-8c71acbb7e06"). InnerVolumeSpecName "kube-api-access-k2z74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.422450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65745d6b-470b-482c-91fd-8c71acbb7e06" (UID: "65745d6b-470b-482c-91fd-8c71acbb7e06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.472097 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.472131 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2z74\" (UniqueName: \"kubernetes.io/projected/65745d6b-470b-482c-91fd-8c71acbb7e06-kube-api-access-k2z74\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.472158 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65745d6b-470b-482c-91fd-8c71acbb7e06-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.947000 4958 generic.go:334] "Generic (PLEG): container finished" podID="15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" containerID="d674dab3f923d3dec17353d18dd694cd7604db3a82c433e90d9d0eddd074c3d3" exitCode=0 Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.947098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" event={"ID":"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7","Type":"ContainerDied","Data":"d674dab3f923d3dec17353d18dd694cd7604db3a82c433e90d9d0eddd074c3d3"} Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.950934 4958 generic.go:334] 
"Generic (PLEG): container finished" podID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerID="1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161" exitCode=0 Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.950995 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2gh" event={"ID":"65745d6b-470b-482c-91fd-8c71acbb7e06","Type":"ContainerDied","Data":"1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161"} Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.951033 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2gh" event={"ID":"65745d6b-470b-482c-91fd-8c71acbb7e06","Type":"ContainerDied","Data":"e9a6b672aa957c893f1c84ce981b6494a938590bfb13ab689a8b98c09d7da79f"} Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.951040 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw2gh" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.951060 4958 scope.go:117] "RemoveContainer" containerID="1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161" Oct 06 12:15:01 crc kubenswrapper[4958]: I1006 12:15:01.980038 4958 scope.go:117] "RemoveContainer" containerID="715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.002939 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw2gh"] Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.008761 4958 scope.go:117] "RemoveContainer" containerID="8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.013715 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pw2gh"] Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.044362 4958 scope.go:117] "RemoveContainer" 
containerID="1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161" Oct 06 12:15:02 crc kubenswrapper[4958]: E1006 12:15:02.047701 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161\": container with ID starting with 1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161 not found: ID does not exist" containerID="1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.047734 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161"} err="failed to get container status \"1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161\": rpc error: code = NotFound desc = could not find container \"1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161\": container with ID starting with 1cafcac7def010676f635e4f81d44b220c548ef447208f123d045a155a017161 not found: ID does not exist" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.047763 4958 scope.go:117] "RemoveContainer" containerID="715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069" Oct 06 12:15:02 crc kubenswrapper[4958]: E1006 12:15:02.048202 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069\": container with ID starting with 715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069 not found: ID does not exist" containerID="715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.048224 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069"} err="failed to get container status \"715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069\": rpc error: code = NotFound desc = could not find container \"715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069\": container with ID starting with 715ff208bf2fc93acd0d74bd5dab08add6856bf19f35f42d0419440fabad1069 not found: ID does not exist" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.048236 4958 scope.go:117] "RemoveContainer" containerID="8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de" Oct 06 12:15:02 crc kubenswrapper[4958]: E1006 12:15:02.048512 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de\": container with ID starting with 8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de not found: ID does not exist" containerID="8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.048537 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de"} err="failed to get container status \"8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de\": rpc error: code = NotFound desc = could not find container \"8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de\": container with ID starting with 8717dd9362359191e0ec7cfb02410ef109bd4de8988d2f3eacae22cf8afdf2de not found: ID does not exist" Oct 06 12:15:02 crc kubenswrapper[4958]: I1006 12:15:02.927432 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" path="/var/lib/kubelet/pods/65745d6b-470b-482c-91fd-8c71acbb7e06/volumes" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 
12:15:03.336294 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.511698 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2nz9\" (UniqueName: \"kubernetes.io/projected/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-kube-api-access-w2nz9\") pod \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.511765 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-secret-volume\") pod \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.511790 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-config-volume\") pod \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\" (UID: \"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7\") " Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.512812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-config-volume" (OuterVolumeSpecName: "config-volume") pod "15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" (UID: "15fb0054-0dd0-4d78-a368-5bb9b70bdfc7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.519953 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-kube-api-access-w2nz9" (OuterVolumeSpecName: "kube-api-access-w2nz9") pod "15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" (UID: "15fb0054-0dd0-4d78-a368-5bb9b70bdfc7"). InnerVolumeSpecName "kube-api-access-w2nz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.521634 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" (UID: "15fb0054-0dd0-4d78-a368-5bb9b70bdfc7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.614266 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2nz9\" (UniqueName: \"kubernetes.io/projected/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-kube-api-access-w2nz9\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.614330 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.614354 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.980533 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" 
event={"ID":"15fb0054-0dd0-4d78-a368-5bb9b70bdfc7","Type":"ContainerDied","Data":"415a0c429fb43d1c4dd757e3da5c812e473b066bf53bfd4ceb85af25e9a504ff"} Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.980606 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="415a0c429fb43d1c4dd757e3da5c812e473b066bf53bfd4ceb85af25e9a504ff" Oct 06 12:15:03 crc kubenswrapper[4958]: I1006 12:15:03.980700 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2" Oct 06 12:15:09 crc kubenswrapper[4958]: I1006 12:15:09.053054 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-279f-account-create-j5q5b"] Oct 06 12:15:09 crc kubenswrapper[4958]: I1006 12:15:09.063928 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-279f-account-create-j5q5b"] Oct 06 12:15:09 crc kubenswrapper[4958]: I1006 12:15:09.076393 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1f67-account-create-nd8n5"] Oct 06 12:15:09 crc kubenswrapper[4958]: I1006 12:15:09.085518 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ebeb-account-create-cqw5c"] Oct 06 12:15:09 crc kubenswrapper[4958]: I1006 12:15:09.092821 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1f67-account-create-nd8n5"] Oct 06 12:15:09 crc kubenswrapper[4958]: I1006 12:15:09.098657 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ebeb-account-create-cqw5c"] Oct 06 12:15:10 crc kubenswrapper[4958]: I1006 12:15:10.934502 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2660dd3f-a383-4257-8591-0ae6dfc37d62" path="/var/lib/kubelet/pods/2660dd3f-a383-4257-8591-0ae6dfc37d62/volumes" Oct 06 12:15:10 crc kubenswrapper[4958]: I1006 12:15:10.935979 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="30bd3ffb-c536-40ae-af8e-e579520bb461" path="/var/lib/kubelet/pods/30bd3ffb-c536-40ae-af8e-e579520bb461/volumes" Oct 06 12:15:10 crc kubenswrapper[4958]: I1006 12:15:10.937367 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8004c5d1-adb8-4834-b181-1f9f393a9555" path="/var/lib/kubelet/pods/8004c5d1-adb8-4834-b181-1f9f393a9555/volumes" Oct 06 12:15:23 crc kubenswrapper[4958]: I1006 12:15:23.801843 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:15:23 crc kubenswrapper[4958]: I1006 12:15:23.802634 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:15:23 crc kubenswrapper[4958]: I1006 12:15:23.802712 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:15:23 crc kubenswrapper[4958]: I1006 12:15:23.803845 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:15:23 crc kubenswrapper[4958]: I1006 12:15:23.804105 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" gracePeriod=600 Oct 06 12:15:23 crc kubenswrapper[4958]: E1006 12:15:23.963832 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:15:24 crc kubenswrapper[4958]: I1006 12:15:24.225182 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" exitCode=0 Oct 06 12:15:24 crc kubenswrapper[4958]: I1006 12:15:24.225264 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"} Oct 06 12:15:24 crc kubenswrapper[4958]: I1006 12:15:24.225345 4958 scope.go:117] "RemoveContainer" containerID="206f17391c9f0682bafb2e779c977ed48daa6668033056ec39d18f8882e93154" Oct 06 12:15:24 crc kubenswrapper[4958]: I1006 12:15:24.226712 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:15:24 crc kubenswrapper[4958]: E1006 12:15:24.227238 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.065300 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zld27"] Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.080064 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zld27"] Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.287890 4958 scope.go:117] "RemoveContainer" containerID="58cd5dabc47b974671339c8128ad769ec7cd73a4962460365f64ed7ee184b341" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.313915 4958 scope.go:117] "RemoveContainer" containerID="01d0deaa66b9acb07807834538e40a9c822915037de0932b420bf804d22ee9ea" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.370048 4958 scope.go:117] "RemoveContainer" containerID="9baca3a04d3fb8a69611a589ecd1f022f36e21ee81568370ebeb268cbbeae73a" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.453011 4958 scope.go:117] "RemoveContainer" containerID="9332de5a5120ad7fa67a25a4e15c057cddb2bb5f7dade6111a95be93aeac7579" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.476362 4958 scope.go:117] "RemoveContainer" containerID="0fe94584495c09f75008c7acfd8ecf375d80f2d3a0555d07594747077c408196" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.535808 4958 scope.go:117] "RemoveContainer" containerID="e605699a52e226bfe9b9e83d9cd83d10da6754e4c007920d3b8fdc00181e8885" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.571895 4958 scope.go:117] "RemoveContainer" containerID="b6a8901a24727c2bac3f7d1fa2a4e8d296acbb728b45ec919812e646dc73df52" Oct 06 12:15:30 crc kubenswrapper[4958]: I1006 12:15:30.964651 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4703feb-17e7-47e1-9487-ffb7fd7303c2" 
path="/var/lib/kubelet/pods/c4703feb-17e7-47e1-9487-ffb7fd7303c2/volumes" Oct 06 12:15:36 crc kubenswrapper[4958]: I1006 12:15:36.920223 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:15:36 crc kubenswrapper[4958]: E1006 12:15:36.921377 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:15:40 crc kubenswrapper[4958]: I1006 12:15:40.408743 4958 generic.go:334] "Generic (PLEG): container finished" podID="27373b57-9835-4096-9b31-eab53444391c" containerID="ea7945d0b60d068d8465a5c2e89970429aa3ac760248798222c8f887e7d2d87f" exitCode=0 Oct 06 12:15:40 crc kubenswrapper[4958]: I1006 12:15:40.408809 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" event={"ID":"27373b57-9835-4096-9b31-eab53444391c","Type":"ContainerDied","Data":"ea7945d0b60d068d8465a5c2e89970429aa3ac760248798222c8f887e7d2d87f"} Oct 06 12:15:41 crc kubenswrapper[4958]: I1006 12:15:41.897504 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.015385 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m7cq\" (UniqueName: \"kubernetes.io/projected/27373b57-9835-4096-9b31-eab53444391c-kube-api-access-9m7cq\") pod \"27373b57-9835-4096-9b31-eab53444391c\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.015621 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-ssh-key\") pod \"27373b57-9835-4096-9b31-eab53444391c\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.015730 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-inventory\") pod \"27373b57-9835-4096-9b31-eab53444391c\" (UID: \"27373b57-9835-4096-9b31-eab53444391c\") " Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.021376 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27373b57-9835-4096-9b31-eab53444391c-kube-api-access-9m7cq" (OuterVolumeSpecName: "kube-api-access-9m7cq") pod "27373b57-9835-4096-9b31-eab53444391c" (UID: "27373b57-9835-4096-9b31-eab53444391c"). InnerVolumeSpecName "kube-api-access-9m7cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.041562 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "27373b57-9835-4096-9b31-eab53444391c" (UID: "27373b57-9835-4096-9b31-eab53444391c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.050767 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-inventory" (OuterVolumeSpecName: "inventory") pod "27373b57-9835-4096-9b31-eab53444391c" (UID: "27373b57-9835-4096-9b31-eab53444391c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.118334 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.118386 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m7cq\" (UniqueName: \"kubernetes.io/projected/27373b57-9835-4096-9b31-eab53444391c-kube-api-access-9m7cq\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.118403 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27373b57-9835-4096-9b31-eab53444391c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.435078 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" event={"ID":"27373b57-9835-4096-9b31-eab53444391c","Type":"ContainerDied","Data":"b9506691750364514d7c3c213fd90857b3df81826686efd1309b4167fc4aa03d"} Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.435198 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-m7n8m" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.435224 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9506691750364514d7c3c213fd90857b3df81826686efd1309b4167fc4aa03d" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.551893 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7"] Oct 06 12:15:42 crc kubenswrapper[4958]: E1006 12:15:42.552614 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="extract-content" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.552641 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="extract-content" Oct 06 12:15:42 crc kubenswrapper[4958]: E1006 12:15:42.552653 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27373b57-9835-4096-9b31-eab53444391c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.552663 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27373b57-9835-4096-9b31-eab53444391c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:42 crc kubenswrapper[4958]: E1006 12:15:42.552684 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="registry-server" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.552693 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="registry-server" Oct 06 12:15:42 crc kubenswrapper[4958]: E1006 12:15:42.552704 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="extract-utilities" Oct 06 12:15:42 crc kubenswrapper[4958]: 
I1006 12:15:42.552712 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="extract-utilities" Oct 06 12:15:42 crc kubenswrapper[4958]: E1006 12:15:42.552729 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" containerName="collect-profiles" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.552736 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" containerName="collect-profiles" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.553007 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65745d6b-470b-482c-91fd-8c71acbb7e06" containerName="registry-server" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.553033 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="27373b57-9835-4096-9b31-eab53444391c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.553055 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" containerName="collect-profiles" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.553842 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.557313 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.557805 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.558084 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.561813 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.566438 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7"] Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.730497 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.730581 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.730863 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pxg\" (UniqueName: \"kubernetes.io/projected/9966ebae-f14d-4b3a-aea7-28843e2fe605-kube-api-access-b5pxg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.832219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.832555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.832652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pxg\" (UniqueName: \"kubernetes.io/projected/9966ebae-f14d-4b3a-aea7-28843e2fe605-kube-api-access-b5pxg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.836718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: 
\"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.836796 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.857281 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pxg\" (UniqueName: \"kubernetes.io/projected/9966ebae-f14d-4b3a-aea7-28843e2fe605-kube-api-access-b5pxg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:42 crc kubenswrapper[4958]: I1006 12:15:42.878475 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" Oct 06 12:15:43 crc kubenswrapper[4958]: I1006 12:15:43.414349 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7"] Oct 06 12:15:43 crc kubenswrapper[4958]: I1006 12:15:43.444338 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" event={"ID":"9966ebae-f14d-4b3a-aea7-28843e2fe605","Type":"ContainerStarted","Data":"69db23087d7a427feb2c9703ea803876e91586bfbccf9b0eb9a4a46a4a78b417"} Oct 06 12:15:44 crc kubenswrapper[4958]: I1006 12:15:44.459007 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" event={"ID":"9966ebae-f14d-4b3a-aea7-28843e2fe605","Type":"ContainerStarted","Data":"8e0889d6af68895540bf867ebd9512fb69e869ee52bf02f0a221095a2e3ade75"} Oct 06 12:15:44 crc kubenswrapper[4958]: I1006 12:15:44.480563 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" podStartSLOduration=2.012829047 podStartE2EDuration="2.480511757s" podCreationTimestamp="2025-10-06 12:15:42 +0000 UTC" firstStartedPulling="2025-10-06 12:15:43.425128548 +0000 UTC m=+1697.311153866" lastFinishedPulling="2025-10-06 12:15:43.892811228 +0000 UTC m=+1697.778836576" observedRunningTime="2025-10-06 12:15:44.4802933 +0000 UTC m=+1698.366318658" watchObservedRunningTime="2025-10-06 12:15:44.480511757 +0000 UTC m=+1698.366537075" Oct 06 12:15:47 crc kubenswrapper[4958]: I1006 12:15:47.069026 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mbwxn"] Oct 06 12:15:47 crc kubenswrapper[4958]: I1006 12:15:47.087415 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mbwxn"] Oct 06 12:15:48 crc kubenswrapper[4958]: I1006 
12:15:48.037039 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kpxjr"] Oct 06 12:15:48 crc kubenswrapper[4958]: I1006 12:15:48.042257 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kpxjr"] Oct 06 12:15:48 crc kubenswrapper[4958]: I1006 12:15:48.932036 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed410ad-6f3a-4b26-bdad-1de8609840cb" path="/var/lib/kubelet/pods/6ed410ad-6f3a-4b26-bdad-1de8609840cb/volumes" Oct 06 12:15:48 crc kubenswrapper[4958]: I1006 12:15:48.932667 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847e5baa-32e2-4bba-88c2-a493724883f9" path="/var/lib/kubelet/pods/847e5baa-32e2-4bba-88c2-a493724883f9/volumes" Oct 06 12:15:50 crc kubenswrapper[4958]: I1006 12:15:50.913858 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:15:50 crc kubenswrapper[4958]: E1006 12:15:50.914596 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:16:03 crc kubenswrapper[4958]: I1006 12:16:03.913570 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:16:03 crc kubenswrapper[4958]: E1006 12:16:03.914882 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:16:17 crc kubenswrapper[4958]: I1006 12:16:17.913959 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:16:17 crc kubenswrapper[4958]: E1006 12:16:17.914736 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:16:30 crc kubenswrapper[4958]: I1006 12:16:30.723363 4958 scope.go:117] "RemoveContainer" containerID="dd9530070cb06ce9962ff69b3a29281c0fe964c5ca71d08d584dad6c2e23d6ee" Oct 06 12:16:30 crc kubenswrapper[4958]: I1006 12:16:30.794715 4958 scope.go:117] "RemoveContainer" containerID="82440537c81f3157cd5b3ab206175befa2606065430c1e7af8e130544b646e8e" Oct 06 12:16:30 crc kubenswrapper[4958]: I1006 12:16:30.846337 4958 scope.go:117] "RemoveContainer" containerID="5282bdbc80071e56bf4669acdc716b15a530e229f47e2b2546483be68d41c10f" Oct 06 12:16:30 crc kubenswrapper[4958]: I1006 12:16:30.914870 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:16:30 crc kubenswrapper[4958]: E1006 12:16:30.915774 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:16:33 crc kubenswrapper[4958]: I1006 12:16:33.076202 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xftf8"] Oct 06 12:16:33 crc kubenswrapper[4958]: I1006 12:16:33.091922 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xftf8"] Oct 06 12:16:34 crc kubenswrapper[4958]: I1006 12:16:34.930092 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da134f3-379d-4899-a301-d9091d57f4d4" path="/var/lib/kubelet/pods/3da134f3-379d-4899-a301-d9091d57f4d4/volumes" Oct 06 12:16:42 crc kubenswrapper[4958]: I1006 12:16:42.914363 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:16:42 crc kubenswrapper[4958]: E1006 12:16:42.915583 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:16:45 crc kubenswrapper[4958]: I1006 12:16:45.136481 4958 generic.go:334] "Generic (PLEG): container finished" podID="9966ebae-f14d-4b3a-aea7-28843e2fe605" containerID="8e0889d6af68895540bf867ebd9512fb69e869ee52bf02f0a221095a2e3ade75" exitCode=2 Oct 06 12:16:45 crc kubenswrapper[4958]: I1006 12:16:45.136587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" event={"ID":"9966ebae-f14d-4b3a-aea7-28843e2fe605","Type":"ContainerDied","Data":"8e0889d6af68895540bf867ebd9512fb69e869ee52bf02f0a221095a2e3ade75"} Oct 06 12:16:46 crc 
kubenswrapper[4958]: I1006 12:16:46.609557 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7"
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.678247 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-ssh-key\") pod \"9966ebae-f14d-4b3a-aea7-28843e2fe605\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") "
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.678520 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5pxg\" (UniqueName: \"kubernetes.io/projected/9966ebae-f14d-4b3a-aea7-28843e2fe605-kube-api-access-b5pxg\") pod \"9966ebae-f14d-4b3a-aea7-28843e2fe605\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") "
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.678586 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-inventory\") pod \"9966ebae-f14d-4b3a-aea7-28843e2fe605\" (UID: \"9966ebae-f14d-4b3a-aea7-28843e2fe605\") "
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.683421 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9966ebae-f14d-4b3a-aea7-28843e2fe605-kube-api-access-b5pxg" (OuterVolumeSpecName: "kube-api-access-b5pxg") pod "9966ebae-f14d-4b3a-aea7-28843e2fe605" (UID: "9966ebae-f14d-4b3a-aea7-28843e2fe605"). InnerVolumeSpecName "kube-api-access-b5pxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.706925 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-inventory" (OuterVolumeSpecName: "inventory") pod "9966ebae-f14d-4b3a-aea7-28843e2fe605" (UID: "9966ebae-f14d-4b3a-aea7-28843e2fe605"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.725940 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9966ebae-f14d-4b3a-aea7-28843e2fe605" (UID: "9966ebae-f14d-4b3a-aea7-28843e2fe605"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.780274 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5pxg\" (UniqueName: \"kubernetes.io/projected/9966ebae-f14d-4b3a-aea7-28843e2fe605-kube-api-access-b5pxg\") on node \"crc\" DevicePath \"\""
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.780312 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-inventory\") on node \"crc\" DevicePath \"\""
Oct 06 12:16:46 crc kubenswrapper[4958]: I1006 12:16:46.780321 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9966ebae-f14d-4b3a-aea7-28843e2fe605-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 06 12:16:47 crc kubenswrapper[4958]: I1006 12:16:47.156617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7" event={"ID":"9966ebae-f14d-4b3a-aea7-28843e2fe605","Type":"ContainerDied","Data":"69db23087d7a427feb2c9703ea803876e91586bfbccf9b0eb9a4a46a4a78b417"}
Oct 06 12:16:47 crc kubenswrapper[4958]: I1006 12:16:47.156660 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69db23087d7a427feb2c9703ea803876e91586bfbccf9b0eb9a4a46a4a78b417"
Oct 06 12:16:47 crc kubenswrapper[4958]: I1006 12:16:47.156716 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.049935 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"]
Oct 06 12:16:54 crc kubenswrapper[4958]: E1006 12:16:54.051316 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9966ebae-f14d-4b3a-aea7-28843e2fe605" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.051341 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9966ebae-f14d-4b3a-aea7-28843e2fe605" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.051696 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9966ebae-f14d-4b3a-aea7-28843e2fe605" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.052986 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.056211 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.056473 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.056958 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.058980 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.075322 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"]
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.138042 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.138803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.138847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nsnp\" (UniqueName: \"kubernetes.io/projected/b84c284f-00cf-4afd-a3e6-84c24af1caae-kube-api-access-6nsnp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.242018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.242079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nsnp\" (UniqueName: \"kubernetes.io/projected/b84c284f-00cf-4afd-a3e6-84c24af1caae-kube-api-access-6nsnp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.242118 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.248835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.253092 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.263962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nsnp\" (UniqueName: \"kubernetes.io/projected/b84c284f-00cf-4afd-a3e6-84c24af1caae-kube-api-access-6nsnp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.383075 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.753280 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"]
Oct 06 12:16:54 crc kubenswrapper[4958]: W1006 12:16:54.764555 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb84c284f_00cf_4afd_a3e6_84c24af1caae.slice/crio-88a2b8af41ce5c20cc88e698ade65ef957320318c4737cddd7db1b97e1f731d3 WatchSource:0}: Error finding container 88a2b8af41ce5c20cc88e698ade65ef957320318c4737cddd7db1b97e1f731d3: Status 404 returned error can't find the container with id 88a2b8af41ce5c20cc88e698ade65ef957320318c4737cddd7db1b97e1f731d3
Oct 06 12:16:54 crc kubenswrapper[4958]: I1006 12:16:54.913657 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"
Oct 06 12:16:54 crc kubenswrapper[4958]: E1006 12:16:54.913997 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:16:55 crc kubenswrapper[4958]: I1006 12:16:55.234547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz" event={"ID":"b84c284f-00cf-4afd-a3e6-84c24af1caae","Type":"ContainerStarted","Data":"88a2b8af41ce5c20cc88e698ade65ef957320318c4737cddd7db1b97e1f731d3"}
Oct 06 12:16:56 crc kubenswrapper[4958]: I1006 12:16:56.244040 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz" event={"ID":"b84c284f-00cf-4afd-a3e6-84c24af1caae","Type":"ContainerStarted","Data":"f12d69cf62b25940ee8b508ae30d50410423850120a988fe2fc28eb78c83b0f8"}
Oct 06 12:16:56 crc kubenswrapper[4958]: I1006 12:16:56.260812 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz" podStartSLOduration=1.796633975 podStartE2EDuration="2.260796373s" podCreationTimestamp="2025-10-06 12:16:54 +0000 UTC" firstStartedPulling="2025-10-06 12:16:54.768102763 +0000 UTC m=+1768.654128071" lastFinishedPulling="2025-10-06 12:16:55.232265171 +0000 UTC m=+1769.118290469" observedRunningTime="2025-10-06 12:16:56.256521669 +0000 UTC m=+1770.142546987" watchObservedRunningTime="2025-10-06 12:16:56.260796373 +0000 UTC m=+1770.146821691"
Oct 06 12:17:07 crc kubenswrapper[4958]: I1006 12:17:07.914139 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"
Oct 06 12:17:07 crc kubenswrapper[4958]: E1006 12:17:07.915195 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:17:18 crc kubenswrapper[4958]: I1006 12:17:18.913695 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"
Oct 06 12:17:18 crc kubenswrapper[4958]: E1006 12:17:18.915004 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:17:31 crc kubenswrapper[4958]: I1006 12:17:31.024482 4958 scope.go:117] "RemoveContainer" containerID="8f9158940894e8da96a3f5f5ed001d8f3cbe61333921f60498bd6d6f7d4324df"
Oct 06 12:17:32 crc kubenswrapper[4958]: I1006 12:17:32.917262 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"
Oct 06 12:17:32 crc kubenswrapper[4958]: E1006 12:17:32.918470 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:17:47 crc kubenswrapper[4958]: I1006 12:17:47.913847 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"
Oct 06 12:17:47 crc kubenswrapper[4958]: E1006 12:17:47.915181 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:17:49 crc kubenswrapper[4958]: I1006 12:17:49.874043 4958 generic.go:334] "Generic (PLEG): container finished" podID="b84c284f-00cf-4afd-a3e6-84c24af1caae" containerID="f12d69cf62b25940ee8b508ae30d50410423850120a988fe2fc28eb78c83b0f8" exitCode=0
Oct 06 12:17:49 crc kubenswrapper[4958]: I1006 12:17:49.874211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz" event={"ID":"b84c284f-00cf-4afd-a3e6-84c24af1caae","Type":"ContainerDied","Data":"f12d69cf62b25940ee8b508ae30d50410423850120a988fe2fc28eb78c83b0f8"}
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.346690 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.475307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-inventory\") pod \"b84c284f-00cf-4afd-a3e6-84c24af1caae\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") "
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.475676 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-ssh-key\") pod \"b84c284f-00cf-4afd-a3e6-84c24af1caae\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") "
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.475814 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nsnp\" (UniqueName: \"kubernetes.io/projected/b84c284f-00cf-4afd-a3e6-84c24af1caae-kube-api-access-6nsnp\") pod \"b84c284f-00cf-4afd-a3e6-84c24af1caae\" (UID: \"b84c284f-00cf-4afd-a3e6-84c24af1caae\") "
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.480098 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84c284f-00cf-4afd-a3e6-84c24af1caae-kube-api-access-6nsnp" (OuterVolumeSpecName: "kube-api-access-6nsnp") pod "b84c284f-00cf-4afd-a3e6-84c24af1caae" (UID: "b84c284f-00cf-4afd-a3e6-84c24af1caae"). InnerVolumeSpecName "kube-api-access-6nsnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.505166 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b84c284f-00cf-4afd-a3e6-84c24af1caae" (UID: "b84c284f-00cf-4afd-a3e6-84c24af1caae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.506812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-inventory" (OuterVolumeSpecName: "inventory") pod "b84c284f-00cf-4afd-a3e6-84c24af1caae" (UID: "b84c284f-00cf-4afd-a3e6-84c24af1caae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.578388 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-inventory\") on node \"crc\" DevicePath \"\""
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.578424 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b84c284f-00cf-4afd-a3e6-84c24af1caae-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.578436 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nsnp\" (UniqueName: \"kubernetes.io/projected/b84c284f-00cf-4afd-a3e6-84c24af1caae-kube-api-access-6nsnp\") on node \"crc\" DevicePath \"\""
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.895708 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz" event={"ID":"b84c284f-00cf-4afd-a3e6-84c24af1caae","Type":"ContainerDied","Data":"88a2b8af41ce5c20cc88e698ade65ef957320318c4737cddd7db1b97e1f731d3"}
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.895744 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a2b8af41ce5c20cc88e698ade65ef957320318c4737cddd7db1b97e1f731d3"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.895834 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.988932 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r248v"]
Oct 06 12:17:51 crc kubenswrapper[4958]: E1006 12:17:51.989334 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84c284f-00cf-4afd-a3e6-84c24af1caae" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.989354 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84c284f-00cf-4afd-a3e6-84c24af1caae" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.989573 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84c284f-00cf-4afd-a3e6-84c24af1caae" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.990165 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.992104 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.992307 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.992388 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 06 12:17:51 crc kubenswrapper[4958]: I1006 12:17:51.992591 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.005322 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r248v"]
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.087635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4dcl\" (UniqueName: \"kubernetes.io/projected/263da9ff-5240-442f-9a4b-e2d8b5a30321-kube-api-access-m4dcl\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.087789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.087979 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.189942 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4dcl\" (UniqueName: \"kubernetes.io/projected/263da9ff-5240-442f-9a4b-e2d8b5a30321-kube-api-access-m4dcl\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.190055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.190226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.196373 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.199393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.215513 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4dcl\" (UniqueName: \"kubernetes.io/projected/263da9ff-5240-442f-9a4b-e2d8b5a30321-kube-api-access-m4dcl\") pod \"ssh-known-hosts-edpm-deployment-r248v\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") " pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.350811 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.766850 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r248v"]
Oct 06 12:17:52 crc kubenswrapper[4958]: I1006 12:17:52.931494 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r248v" event={"ID":"263da9ff-5240-442f-9a4b-e2d8b5a30321","Type":"ContainerStarted","Data":"60690dc0d59e5c016aff02560c3db9389ca1e410a43037c504b4011edc544bb9"}
Oct 06 12:17:53 crc kubenswrapper[4958]: I1006 12:17:53.943855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r248v" event={"ID":"263da9ff-5240-442f-9a4b-e2d8b5a30321","Type":"ContainerStarted","Data":"cad0b34def5befabad49258c59f54bbe30dd7a7d7d7f5a0ae2ead98e85358334"}
Oct 06 12:17:53 crc kubenswrapper[4958]: I1006 12:17:53.979970 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-r248v" podStartSLOduration=2.548025292 podStartE2EDuration="2.979945857s" podCreationTimestamp="2025-10-06 12:17:51 +0000 UTC" firstStartedPulling="2025-10-06 12:17:52.757851237 +0000 UTC m=+1826.643876545" lastFinishedPulling="2025-10-06 12:17:53.189771802 +0000 UTC m=+1827.075797110" observedRunningTime="2025-10-06 12:17:53.966761158 +0000 UTC m=+1827.852786496" watchObservedRunningTime="2025-10-06 12:17:53.979945857 +0000 UTC m=+1827.865971175"
Oct 06 12:17:59 crc kubenswrapper[4958]: I1006 12:17:59.914035 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f"
Oct 06 12:17:59 crc kubenswrapper[4958]: E1006 12:17:59.914864 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:18:02 crc kubenswrapper[4958]: I1006 12:18:02.036886 4958 generic.go:334] "Generic (PLEG): container finished" podID="263da9ff-5240-442f-9a4b-e2d8b5a30321" containerID="cad0b34def5befabad49258c59f54bbe30dd7a7d7d7f5a0ae2ead98e85358334" exitCode=0
Oct 06 12:18:02 crc kubenswrapper[4958]: I1006 12:18:02.036970 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r248v" event={"ID":"263da9ff-5240-442f-9a4b-e2d8b5a30321","Type":"ContainerDied","Data":"cad0b34def5befabad49258c59f54bbe30dd7a7d7d7f5a0ae2ead98e85358334"}
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.581682 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.634984 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-ssh-key-openstack-edpm-ipam\") pod \"263da9ff-5240-442f-9a4b-e2d8b5a30321\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") "
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.635215 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4dcl\" (UniqueName: \"kubernetes.io/projected/263da9ff-5240-442f-9a4b-e2d8b5a30321-kube-api-access-m4dcl\") pod \"263da9ff-5240-442f-9a4b-e2d8b5a30321\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") "
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.635382 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-inventory-0\") pod \"263da9ff-5240-442f-9a4b-e2d8b5a30321\" (UID: \"263da9ff-5240-442f-9a4b-e2d8b5a30321\") "
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.643450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263da9ff-5240-442f-9a4b-e2d8b5a30321-kube-api-access-m4dcl" (OuterVolumeSpecName: "kube-api-access-m4dcl") pod "263da9ff-5240-442f-9a4b-e2d8b5a30321" (UID: "263da9ff-5240-442f-9a4b-e2d8b5a30321"). InnerVolumeSpecName "kube-api-access-m4dcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.669546 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "263da9ff-5240-442f-9a4b-e2d8b5a30321" (UID: "263da9ff-5240-442f-9a4b-e2d8b5a30321"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.671113 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "263da9ff-5240-442f-9a4b-e2d8b5a30321" (UID: "263da9ff-5240-442f-9a4b-e2d8b5a30321"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.738023 4958 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-inventory-0\") on node \"crc\" DevicePath \"\""
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.738438 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/263da9ff-5240-442f-9a4b-e2d8b5a30321-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Oct 06 12:18:03 crc kubenswrapper[4958]: I1006 12:18:03.738453 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4dcl\" (UniqueName: \"kubernetes.io/projected/263da9ff-5240-442f-9a4b-e2d8b5a30321-kube-api-access-m4dcl\") on node \"crc\" DevicePath \"\""
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.071418 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r248v" event={"ID":"263da9ff-5240-442f-9a4b-e2d8b5a30321","Type":"ContainerDied","Data":"60690dc0d59e5c016aff02560c3db9389ca1e410a43037c504b4011edc544bb9"}
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.071531 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60690dc0d59e5c016aff02560c3db9389ca1e410a43037c504b4011edc544bb9"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.071685 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r248v"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.151281 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"]
Oct 06 12:18:04 crc kubenswrapper[4958]: E1006 12:18:04.151896 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263da9ff-5240-442f-9a4b-e2d8b5a30321" containerName="ssh-known-hosts-edpm-deployment"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.151953 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="263da9ff-5240-442f-9a4b-e2d8b5a30321" containerName="ssh-known-hosts-edpm-deployment"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.152282 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="263da9ff-5240-442f-9a4b-e2d8b5a30321" containerName="ssh-known-hosts-edpm-deployment"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.153287 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.159346 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.159602 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.159745 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.159898 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.161693 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"]
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.246887 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.247025 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.247206 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrbx\" (UniqueName: \"kubernetes.io/projected/7a0b8144-e1d6-4d95-8f13-71ea09785481-kube-api-access-psrbx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.348729 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrbx\" (UniqueName: \"kubernetes.io/projected/7a0b8144-e1d6-4d95-8f13-71ea09785481-kube-api-access-psrbx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.348887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.348930 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.355563 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.370636 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.371712 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrbx\" (UniqueName: \"kubernetes.io/projected/7a0b8144-e1d6-4d95-8f13-71ea09785481-kube-api-access-psrbx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jd665\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"
Oct 06 12:18:04 crc kubenswrapper[4958]: I1006 12:18:04.523717 4958 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" Oct 06 12:18:05 crc kubenswrapper[4958]: I1006 12:18:05.096239 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665"] Oct 06 12:18:05 crc kubenswrapper[4958]: W1006 12:18:05.102074 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0b8144_e1d6_4d95_8f13_71ea09785481.slice/crio-45a9eff29bebd4c71bbd47b10f71050be6d061c27bb159fea79a4c081478b1e8 WatchSource:0}: Error finding container 45a9eff29bebd4c71bbd47b10f71050be6d061c27bb159fea79a4c081478b1e8: Status 404 returned error can't find the container with id 45a9eff29bebd4c71bbd47b10f71050be6d061c27bb159fea79a4c081478b1e8 Oct 06 12:18:06 crc kubenswrapper[4958]: I1006 12:18:06.097273 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" event={"ID":"7a0b8144-e1d6-4d95-8f13-71ea09785481","Type":"ContainerStarted","Data":"1229947eeeb399ef39e8fe9e0e877f7cc42e1738da7a86cd611624f0438bc756"} Oct 06 12:18:06 crc kubenswrapper[4958]: I1006 12:18:06.097610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" event={"ID":"7a0b8144-e1d6-4d95-8f13-71ea09785481","Type":"ContainerStarted","Data":"45a9eff29bebd4c71bbd47b10f71050be6d061c27bb159fea79a4c081478b1e8"} Oct 06 12:18:06 crc kubenswrapper[4958]: I1006 12:18:06.122549 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" podStartSLOduration=1.714655138 podStartE2EDuration="2.122525669s" podCreationTimestamp="2025-10-06 12:18:04 +0000 UTC" firstStartedPulling="2025-10-06 12:18:05.105288519 +0000 UTC m=+1838.991313837" lastFinishedPulling="2025-10-06 12:18:05.51315906 +0000 UTC m=+1839.399184368" observedRunningTime="2025-10-06 
12:18:06.117201264 +0000 UTC m=+1840.003226582" watchObservedRunningTime="2025-10-06 12:18:06.122525669 +0000 UTC m=+1840.008550977" Oct 06 12:18:10 crc kubenswrapper[4958]: I1006 12:18:10.916709 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:18:10 crc kubenswrapper[4958]: E1006 12:18:10.917408 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:18:15 crc kubenswrapper[4958]: I1006 12:18:15.187194 4958 generic.go:334] "Generic (PLEG): container finished" podID="7a0b8144-e1d6-4d95-8f13-71ea09785481" containerID="1229947eeeb399ef39e8fe9e0e877f7cc42e1738da7a86cd611624f0438bc756" exitCode=0 Oct 06 12:18:15 crc kubenswrapper[4958]: I1006 12:18:15.187293 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" event={"ID":"7a0b8144-e1d6-4d95-8f13-71ea09785481","Type":"ContainerDied","Data":"1229947eeeb399ef39e8fe9e0e877f7cc42e1738da7a86cd611624f0438bc756"} Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.589754 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.608854 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-ssh-key\") pod \"7a0b8144-e1d6-4d95-8f13-71ea09785481\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.609068 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psrbx\" (UniqueName: \"kubernetes.io/projected/7a0b8144-e1d6-4d95-8f13-71ea09785481-kube-api-access-psrbx\") pod \"7a0b8144-e1d6-4d95-8f13-71ea09785481\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.609198 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-inventory\") pod \"7a0b8144-e1d6-4d95-8f13-71ea09785481\" (UID: \"7a0b8144-e1d6-4d95-8f13-71ea09785481\") " Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.615526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0b8144-e1d6-4d95-8f13-71ea09785481-kube-api-access-psrbx" (OuterVolumeSpecName: "kube-api-access-psrbx") pod "7a0b8144-e1d6-4d95-8f13-71ea09785481" (UID: "7a0b8144-e1d6-4d95-8f13-71ea09785481"). InnerVolumeSpecName "kube-api-access-psrbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.644716 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-inventory" (OuterVolumeSpecName: "inventory") pod "7a0b8144-e1d6-4d95-8f13-71ea09785481" (UID: "7a0b8144-e1d6-4d95-8f13-71ea09785481"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.660193 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7a0b8144-e1d6-4d95-8f13-71ea09785481" (UID: "7a0b8144-e1d6-4d95-8f13-71ea09785481"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.710940 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.711377 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a0b8144-e1d6-4d95-8f13-71ea09785481-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:16 crc kubenswrapper[4958]: I1006 12:18:16.711391 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psrbx\" (UniqueName: \"kubernetes.io/projected/7a0b8144-e1d6-4d95-8f13-71ea09785481-kube-api-access-psrbx\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.210974 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" event={"ID":"7a0b8144-e1d6-4d95-8f13-71ea09785481","Type":"ContainerDied","Data":"45a9eff29bebd4c71bbd47b10f71050be6d061c27bb159fea79a4c081478b1e8"} Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.211020 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jd665" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.211044 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a9eff29bebd4c71bbd47b10f71050be6d061c27bb159fea79a4c081478b1e8" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.326345 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht"] Oct 06 12:18:17 crc kubenswrapper[4958]: E1006 12:18:17.326736 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0b8144-e1d6-4d95-8f13-71ea09785481" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.326757 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0b8144-e1d6-4d95-8f13-71ea09785481" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.326988 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0b8144-e1d6-4d95-8f13-71ea09785481" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.327710 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.331359 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.331816 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.332450 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.337091 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.348527 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht"] Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.428296 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.428495 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.428559 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqj4g\" (UniqueName: \"kubernetes.io/projected/eee45a20-bff0-4c1c-a7a7-84646b71c82d-kube-api-access-cqj4g\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.530525 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqj4g\" (UniqueName: \"kubernetes.io/projected/eee45a20-bff0-4c1c-a7a7-84646b71c82d-kube-api-access-cqj4g\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.530705 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.530893 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.541794 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: 
\"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.542054 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.552359 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqj4g\" (UniqueName: \"kubernetes.io/projected/eee45a20-bff0-4c1c-a7a7-84646b71c82d-kube-api-access-cqj4g\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:17 crc kubenswrapper[4958]: I1006 12:18:17.647080 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:18 crc kubenswrapper[4958]: I1006 12:18:18.220531 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht"] Oct 06 12:18:18 crc kubenswrapper[4958]: I1006 12:18:18.223526 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" event={"ID":"eee45a20-bff0-4c1c-a7a7-84646b71c82d","Type":"ContainerStarted","Data":"4d7ad57ce8f0532454c0d8b5e95aa7a0121d7f64d3f4a7f67c7aa2b7394b948f"} Oct 06 12:18:19 crc kubenswrapper[4958]: I1006 12:18:19.239459 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" event={"ID":"eee45a20-bff0-4c1c-a7a7-84646b71c82d","Type":"ContainerStarted","Data":"ed566eaa39fd943da27e6fc499e20d7452010c63b2d25900c3394a2c659448fc"} Oct 06 12:18:19 crc kubenswrapper[4958]: I1006 12:18:19.272356 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" podStartSLOduration=1.757868444 podStartE2EDuration="2.272331357s" podCreationTimestamp="2025-10-06 12:18:17 +0000 UTC" firstStartedPulling="2025-10-06 12:18:18.211264064 +0000 UTC m=+1852.097289392" lastFinishedPulling="2025-10-06 12:18:18.725726997 +0000 UTC m=+1852.611752305" observedRunningTime="2025-10-06 12:18:19.257632407 +0000 UTC m=+1853.143657755" watchObservedRunningTime="2025-10-06 12:18:19.272331357 +0000 UTC m=+1853.158356685" Oct 06 12:18:23 crc kubenswrapper[4958]: I1006 12:18:23.914331 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:18:23 crc kubenswrapper[4958]: E1006 12:18:23.915631 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:18:30 crc kubenswrapper[4958]: I1006 12:18:30.358056 4958 generic.go:334] "Generic (PLEG): container finished" podID="eee45a20-bff0-4c1c-a7a7-84646b71c82d" containerID="ed566eaa39fd943da27e6fc499e20d7452010c63b2d25900c3394a2c659448fc" exitCode=0 Oct 06 12:18:30 crc kubenswrapper[4958]: I1006 12:18:30.358130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" event={"ID":"eee45a20-bff0-4c1c-a7a7-84646b71c82d","Type":"ContainerDied","Data":"ed566eaa39fd943da27e6fc499e20d7452010c63b2d25900c3394a2c659448fc"} Oct 06 12:18:31 crc kubenswrapper[4958]: I1006 12:18:31.864502 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.004517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-ssh-key\") pod \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.004624 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-inventory\") pod \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.004667 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqj4g\" (UniqueName: 
\"kubernetes.io/projected/eee45a20-bff0-4c1c-a7a7-84646b71c82d-kube-api-access-cqj4g\") pod \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\" (UID: \"eee45a20-bff0-4c1c-a7a7-84646b71c82d\") " Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.018485 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee45a20-bff0-4c1c-a7a7-84646b71c82d-kube-api-access-cqj4g" (OuterVolumeSpecName: "kube-api-access-cqj4g") pod "eee45a20-bff0-4c1c-a7a7-84646b71c82d" (UID: "eee45a20-bff0-4c1c-a7a7-84646b71c82d"). InnerVolumeSpecName "kube-api-access-cqj4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.052580 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eee45a20-bff0-4c1c-a7a7-84646b71c82d" (UID: "eee45a20-bff0-4c1c-a7a7-84646b71c82d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.059673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-inventory" (OuterVolumeSpecName: "inventory") pod "eee45a20-bff0-4c1c-a7a7-84646b71c82d" (UID: "eee45a20-bff0-4c1c-a7a7-84646b71c82d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.108119 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.108189 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee45a20-bff0-4c1c-a7a7-84646b71c82d-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.108211 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqj4g\" (UniqueName: \"kubernetes.io/projected/eee45a20-bff0-4c1c-a7a7-84646b71c82d-kube-api-access-cqj4g\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.384367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" event={"ID":"eee45a20-bff0-4c1c-a7a7-84646b71c82d","Type":"ContainerDied","Data":"4d7ad57ce8f0532454c0d8b5e95aa7a0121d7f64d3f4a7f67c7aa2b7394b948f"} Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.384433 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7ad57ce8f0532454c0d8b5e95aa7a0121d7f64d3f4a7f67c7aa2b7394b948f" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.384503 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.576064 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw"] Oct 06 12:18:32 crc kubenswrapper[4958]: E1006 12:18:32.576634 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee45a20-bff0-4c1c-a7a7-84646b71c82d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.576664 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee45a20-bff0-4c1c-a7a7-84646b71c82d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.576907 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee45a20-bff0-4c1c-a7a7-84646b71c82d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.577814 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.581481 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.581548 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.581627 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.581527 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.582005 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.582040 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.582446 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.590328 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.605072 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw"] Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721014 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721522 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721575 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721601 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721688 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721712 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721730 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szl7g\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-kube-api-access-szl7g\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721750 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721798 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.721825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.822967 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823022 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823128 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823431 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.823981 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.824215 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.824276 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.824313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szl7g\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-kube-api-access-szl7g\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.824359 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.824431 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.829118 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.829319 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.830427 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.831240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.832141 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.832438 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.833015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.833196 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.834500 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.834845 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.837570 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.838464 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.838944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.853290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szl7g\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-kube-api-access-szl7g\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:32 crc kubenswrapper[4958]: I1006 12:18:32.919964 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:18:33 crc kubenswrapper[4958]: I1006 12:18:33.536521 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw"] Oct 06 12:18:33 crc kubenswrapper[4958]: I1006 12:18:33.550294 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:18:34 crc kubenswrapper[4958]: I1006 12:18:34.404264 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" event={"ID":"f1064552-8f6a-46ac-8628-d9d2bc8c2a95","Type":"ContainerStarted","Data":"5663979bb10375144315b201b04819738386a6c24e1458d04ab1e0458c2ea8de"} Oct 06 12:18:34 crc kubenswrapper[4958]: I1006 12:18:34.405853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" event={"ID":"f1064552-8f6a-46ac-8628-d9d2bc8c2a95","Type":"ContainerStarted","Data":"08f1d3c3fce027b99315d8e7754f6f7d6877c502c9661eec51d63d0ab3b283b4"} Oct 06 12:18:34 crc kubenswrapper[4958]: I1006 12:18:34.915419 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:18:34 crc kubenswrapper[4958]: E1006 12:18:34.916517 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 
06 12:18:45 crc kubenswrapper[4958]: I1006 12:18:45.913856 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:18:45 crc kubenswrapper[4958]: E1006 12:18:45.914604 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:18:58 crc kubenswrapper[4958]: I1006 12:18:58.914352 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:18:58 crc kubenswrapper[4958]: E1006 12:18:58.915858 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:19:09 crc kubenswrapper[4958]: I1006 12:19:09.913730 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:19:09 crc kubenswrapper[4958]: E1006 12:19:09.914704 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:19:19 crc kubenswrapper[4958]: I1006 12:19:19.940115 4958 generic.go:334] "Generic (PLEG): container finished" podID="f1064552-8f6a-46ac-8628-d9d2bc8c2a95" containerID="5663979bb10375144315b201b04819738386a6c24e1458d04ab1e0458c2ea8de" exitCode=0 Oct 06 12:19:19 crc kubenswrapper[4958]: I1006 12:19:19.940260 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" event={"ID":"f1064552-8f6a-46ac-8628-d9d2bc8c2a95","Type":"ContainerDied","Data":"5663979bb10375144315b201b04819738386a6c24e1458d04ab1e0458c2ea8de"} Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.393588 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.463805 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.463858 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-libvirt-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.463903 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-telemetry-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: 
\"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.463940 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464003 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ovn-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464096 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-repo-setup-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464168 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-neutron-metadata-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: 
\"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464244 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-inventory\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-nova-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ssh-key\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464945 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-bootstrap-combined-ca-bundle\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.464993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szl7g\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-kube-api-access-szl7g\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.465025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\" (UID: \"f1064552-8f6a-46ac-8628-d9d2bc8c2a95\") " Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.470590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.471879 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.471976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.472012 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.472548 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.473516 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.473605 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.474695 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.475799 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.476126 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.477177 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-kube-api-access-szl7g" (OuterVolumeSpecName: "kube-api-access-szl7g") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "kube-api-access-szl7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.477636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.508802 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-inventory" (OuterVolumeSpecName: "inventory") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.513165 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1064552-8f6a-46ac-8628-d9d2bc8c2a95" (UID: "f1064552-8f6a-46ac-8628-d9d2bc8c2a95"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567700 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567756 4958 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567782 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567803 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567823 4958 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567842 4958 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567857 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567873 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szl7g\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-kube-api-access-szl7g\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567890 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567907 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567924 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567941 4958 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.567960 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc 
kubenswrapper[4958]: I1006 12:19:21.567981 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1064552-8f6a-46ac-8628-d9d2bc8c2a95-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.970496 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" event={"ID":"f1064552-8f6a-46ac-8628-d9d2bc8c2a95","Type":"ContainerDied","Data":"08f1d3c3fce027b99315d8e7754f6f7d6877c502c9661eec51d63d0ab3b283b4"} Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.971228 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f1d3c3fce027b99315d8e7754f6f7d6877c502c9661eec51d63d0ab3b283b4" Oct 06 12:19:21 crc kubenswrapper[4958]: I1006 12:19:21.970724 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.088055 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg"] Oct 06 12:19:22 crc kubenswrapper[4958]: E1006 12:19:22.088886 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1064552-8f6a-46ac-8628-d9d2bc8c2a95" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.088924 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1064552-8f6a-46ac-8628-d9d2bc8c2a95" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.089798 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1064552-8f6a-46ac-8628-d9d2bc8c2a95" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.091283 4958 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.093872 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.094222 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.094575 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.094792 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.101064 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.105249 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg"] Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.200917 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.201082 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: 
\"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.201186 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vm86\" (UniqueName: \"kubernetes.io/projected/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-kube-api-access-5vm86\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.201525 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.201724 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.304038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vm86\" (UniqueName: \"kubernetes.io/projected/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-kube-api-access-5vm86\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.304608 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.304839 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.305079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.305400 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.307198 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.309202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.314637 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.314873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.337064 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vm86\" (UniqueName: \"kubernetes.io/projected/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-kube-api-access-5vm86\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5ljwg\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.419491 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:19:22 crc kubenswrapper[4958]: I1006 12:19:22.913879 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:19:22 crc kubenswrapper[4958]: E1006 12:19:22.914747 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:19:23 crc kubenswrapper[4958]: I1006 12:19:23.021245 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg"] Oct 06 12:19:23 crc kubenswrapper[4958]: I1006 12:19:23.994261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" event={"ID":"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d","Type":"ContainerStarted","Data":"0ff6d790d4020aba92738cb7e2d02c36e5c0a87bd998c7bc777a9ce95037ed93"} Oct 06 12:19:25 crc kubenswrapper[4958]: I1006 12:19:25.009116 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" event={"ID":"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d","Type":"ContainerStarted","Data":"3f156c8992ae4977b8b91711e2f1cddd686ea575b15dd4e3b27a96157d42c8ba"} Oct 06 12:19:25 crc kubenswrapper[4958]: I1006 12:19:25.047508 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" podStartSLOduration=2.2517554459999998 podStartE2EDuration="3.047479742s" podCreationTimestamp="2025-10-06 12:19:22 +0000 UTC" firstStartedPulling="2025-10-06 12:19:23.023827737 +0000 UTC 
m=+1916.909853085" lastFinishedPulling="2025-10-06 12:19:23.819552043 +0000 UTC m=+1917.705577381" observedRunningTime="2025-10-06 12:19:25.030474898 +0000 UTC m=+1918.916500246" watchObservedRunningTime="2025-10-06 12:19:25.047479742 +0000 UTC m=+1918.933505090" Oct 06 12:19:33 crc kubenswrapper[4958]: I1006 12:19:33.914346 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:19:33 crc kubenswrapper[4958]: E1006 12:19:33.915450 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:19:44 crc kubenswrapper[4958]: I1006 12:19:44.914059 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:19:44 crc kubenswrapper[4958]: E1006 12:19:44.915500 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:19:58 crc kubenswrapper[4958]: I1006 12:19:58.917602 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:19:58 crc kubenswrapper[4958]: E1006 12:19:58.918595 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:20:11 crc kubenswrapper[4958]: I1006 12:20:11.913848 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:20:11 crc kubenswrapper[4958]: E1006 12:20:11.914764 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:20:23 crc kubenswrapper[4958]: I1006 12:20:23.914613 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:20:24 crc kubenswrapper[4958]: I1006 12:20:24.672264 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"18733eebda307ab68dd54cc8556b7585a95b1eb772aad1fe185443d73decf313"} Oct 06 12:20:38 crc kubenswrapper[4958]: I1006 12:20:38.823563 4958 generic.go:334] "Generic (PLEG): container finished" podID="08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" containerID="3f156c8992ae4977b8b91711e2f1cddd686ea575b15dd4e3b27a96157d42c8ba" exitCode=0 Oct 06 12:20:38 crc kubenswrapper[4958]: I1006 12:20:38.823771 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" 
event={"ID":"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d","Type":"ContainerDied","Data":"3f156c8992ae4977b8b91711e2f1cddd686ea575b15dd4e3b27a96157d42c8ba"} Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.356232 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.461630 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vm86\" (UniqueName: \"kubernetes.io/projected/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-kube-api-access-5vm86\") pod \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.461781 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovncontroller-config-0\") pod \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.461869 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ssh-key\") pod \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.461920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovn-combined-ca-bundle\") pod \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.462047 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-inventory\") pod \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\" (UID: \"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d\") " Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.471658 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" (UID: "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.471943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-kube-api-access-5vm86" (OuterVolumeSpecName: "kube-api-access-5vm86") pod "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" (UID: "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d"). InnerVolumeSpecName "kube-api-access-5vm86". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.504103 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" (UID: "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.508324 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" (UID: "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.526761 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-inventory" (OuterVolumeSpecName: "inventory") pod "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" (UID: "08d67da7-f4f3-4e1c-acd8-c8fcec30f59d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.565240 4958 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.565302 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.565324 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.565342 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.565360 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vm86\" (UniqueName: \"kubernetes.io/projected/08d67da7-f4f3-4e1c-acd8-c8fcec30f59d-kube-api-access-5vm86\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.854063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" event={"ID":"08d67da7-f4f3-4e1c-acd8-c8fcec30f59d","Type":"ContainerDied","Data":"0ff6d790d4020aba92738cb7e2d02c36e5c0a87bd998c7bc777a9ce95037ed93"} Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.854100 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5ljwg" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.854103 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ff6d790d4020aba92738cb7e2d02c36e5c0a87bd998c7bc777a9ce95037ed93" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.983312 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t"] Oct 06 12:20:40 crc kubenswrapper[4958]: E1006 12:20:40.983846 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.983864 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.984161 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d67da7-f4f3-4e1c-acd8-c8fcec30f59d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.984983 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.988457 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.988673 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.988863 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.989003 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.989118 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:20:40 crc kubenswrapper[4958]: I1006 12:20:40.992922 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.011841 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t"] Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.085315 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.085420 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.085522 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.085554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.085699 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zbxh\" (UniqueName: \"kubernetes.io/projected/c7524451-dd6e-42b7-8454-4e9efe77c79c-kube-api-access-2zbxh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.085763 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.188950 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.189082 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.189122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.189224 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zbxh\" (UniqueName: \"kubernetes.io/projected/c7524451-dd6e-42b7-8454-4e9efe77c79c-kube-api-access-2zbxh\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.190202 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.190828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.193754 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.193830 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.194553 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.194935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.195232 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.207812 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zbxh\" (UniqueName: \"kubernetes.io/projected/c7524451-dd6e-42b7-8454-4e9efe77c79c-kube-api-access-2zbxh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.310095 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:20:41 crc kubenswrapper[4958]: I1006 12:20:41.864727 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t"] Oct 06 12:20:42 crc kubenswrapper[4958]: I1006 12:20:42.894267 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" event={"ID":"c7524451-dd6e-42b7-8454-4e9efe77c79c","Type":"ContainerStarted","Data":"bccb26261a3aed62c07cd749c3bae2a4d86d8d9490c92f2ce1cf6c0b9001d13d"} Oct 06 12:20:42 crc kubenswrapper[4958]: I1006 12:20:42.894891 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" event={"ID":"c7524451-dd6e-42b7-8454-4e9efe77c79c","Type":"ContainerStarted","Data":"43058a7a62196bdd4439909978ce47ebaf669b625a185757d11a88cd3311fb71"} Oct 06 12:20:42 crc kubenswrapper[4958]: I1006 12:20:42.924639 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" podStartSLOduration=2.518260052 podStartE2EDuration="2.924615581s" podCreationTimestamp="2025-10-06 12:20:40 +0000 UTC" firstStartedPulling="2025-10-06 12:20:41.880667074 +0000 UTC m=+1995.766692382" lastFinishedPulling="2025-10-06 12:20:42.287022593 +0000 UTC m=+1996.173047911" observedRunningTime="2025-10-06 12:20:42.915820691 +0000 UTC m=+1996.801846039" watchObservedRunningTime="2025-10-06 12:20:42.924615581 +0000 UTC m=+1996.810640899" Oct 06 12:21:39 crc kubenswrapper[4958]: I1006 12:21:39.528883 4958 generic.go:334] "Generic (PLEG): container finished" podID="c7524451-dd6e-42b7-8454-4e9efe77c79c" containerID="bccb26261a3aed62c07cd749c3bae2a4d86d8d9490c92f2ce1cf6c0b9001d13d" exitCode=0 Oct 06 12:21:39 crc kubenswrapper[4958]: I1006 
12:21:39.528964 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" event={"ID":"c7524451-dd6e-42b7-8454-4e9efe77c79c","Type":"ContainerDied","Data":"bccb26261a3aed62c07cd749c3bae2a4d86d8d9490c92f2ce1cf6c0b9001d13d"} Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.197719 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.300634 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-inventory\") pod \"c7524451-dd6e-42b7-8454-4e9efe77c79c\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.300742 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-metadata-combined-ca-bundle\") pod \"c7524451-dd6e-42b7-8454-4e9efe77c79c\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.300771 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-ssh-key\") pod \"c7524451-dd6e-42b7-8454-4e9efe77c79c\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.300886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-nova-metadata-neutron-config-0\") pod \"c7524451-dd6e-42b7-8454-4e9efe77c79c\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " Oct 06 12:21:41 crc 
kubenswrapper[4958]: I1006 12:21:41.300999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c7524451-dd6e-42b7-8454-4e9efe77c79c\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.301075 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zbxh\" (UniqueName: \"kubernetes.io/projected/c7524451-dd6e-42b7-8454-4e9efe77c79c-kube-api-access-2zbxh\") pod \"c7524451-dd6e-42b7-8454-4e9efe77c79c\" (UID: \"c7524451-dd6e-42b7-8454-4e9efe77c79c\") " Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.305751 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7524451-dd6e-42b7-8454-4e9efe77c79c-kube-api-access-2zbxh" (OuterVolumeSpecName: "kube-api-access-2zbxh") pod "c7524451-dd6e-42b7-8454-4e9efe77c79c" (UID: "c7524451-dd6e-42b7-8454-4e9efe77c79c"). InnerVolumeSpecName "kube-api-access-2zbxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.309308 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c7524451-dd6e-42b7-8454-4e9efe77c79c" (UID: "c7524451-dd6e-42b7-8454-4e9efe77c79c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.327859 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c7524451-dd6e-42b7-8454-4e9efe77c79c" (UID: "c7524451-dd6e-42b7-8454-4e9efe77c79c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.331649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c7524451-dd6e-42b7-8454-4e9efe77c79c" (UID: "c7524451-dd6e-42b7-8454-4e9efe77c79c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.340577 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-inventory" (OuterVolumeSpecName: "inventory") pod "c7524451-dd6e-42b7-8454-4e9efe77c79c" (UID: "c7524451-dd6e-42b7-8454-4e9efe77c79c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.351857 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c7524451-dd6e-42b7-8454-4e9efe77c79c" (UID: "c7524451-dd6e-42b7-8454-4e9efe77c79c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.403372 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.403403 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.403415 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zbxh\" (UniqueName: \"kubernetes.io/projected/c7524451-dd6e-42b7-8454-4e9efe77c79c-kube-api-access-2zbxh\") on node \"crc\" DevicePath \"\"" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.403425 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.403433 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.403444 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7524451-dd6e-42b7-8454-4e9efe77c79c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.555277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" 
event={"ID":"c7524451-dd6e-42b7-8454-4e9efe77c79c","Type":"ContainerDied","Data":"43058a7a62196bdd4439909978ce47ebaf669b625a185757d11a88cd3311fb71"} Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.555321 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43058a7a62196bdd4439909978ce47ebaf669b625a185757d11a88cd3311fb71" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.555376 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.695838 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk"] Oct 06 12:21:41 crc kubenswrapper[4958]: E1006 12:21:41.696356 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7524451-dd6e-42b7-8454-4e9efe77c79c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.696385 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7524451-dd6e-42b7-8454-4e9efe77c79c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.696785 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7524451-dd6e-42b7-8454-4e9efe77c79c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.697854 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.702115 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.702190 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.702506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.702738 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.703542 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.710680 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk"] Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.811366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.811542 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktjg\" (UniqueName: \"kubernetes.io/projected/186200b0-8ce3-46a8-9691-42b254a077be-kube-api-access-tktjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: 
\"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.811605 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.811667 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.811847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.914038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktjg\" (UniqueName: \"kubernetes.io/projected/186200b0-8ce3-46a8-9691-42b254a077be-kube-api-access-tktjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.914579 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.914650 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.914866 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.915050 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.921216 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.921676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.924771 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.925541 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:41 crc kubenswrapper[4958]: I1006 12:21:41.936487 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktjg\" (UniqueName: \"kubernetes.io/projected/186200b0-8ce3-46a8-9691-42b254a077be-kube-api-access-tktjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:42 crc kubenswrapper[4958]: I1006 12:21:42.023545 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:21:42 crc kubenswrapper[4958]: I1006 12:21:42.621639 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk"] Oct 06 12:21:43 crc kubenswrapper[4958]: I1006 12:21:43.579456 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" event={"ID":"186200b0-8ce3-46a8-9691-42b254a077be","Type":"ContainerStarted","Data":"abb024f16f020633345c0d72db94c945d2a797e5a6e2a8f82fffb4f0fb6adf36"} Oct 06 12:21:43 crc kubenswrapper[4958]: I1006 12:21:43.579788 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" event={"ID":"186200b0-8ce3-46a8-9691-42b254a077be","Type":"ContainerStarted","Data":"e9aa912b2bf8898207e75ad69f5af243ad2b8045c9ac0546db0b853e195ff768"} Oct 06 12:21:43 crc kubenswrapper[4958]: I1006 12:21:43.606860 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" podStartSLOduration=1.958920772 podStartE2EDuration="2.606830211s" podCreationTimestamp="2025-10-06 12:21:41 +0000 UTC" firstStartedPulling="2025-10-06 12:21:42.630220778 +0000 UTC m=+2056.516246086" lastFinishedPulling="2025-10-06 12:21:43.278130177 +0000 UTC m=+2057.164155525" observedRunningTime="2025-10-06 12:21:43.599902122 +0000 UTC m=+2057.485927460" watchObservedRunningTime="2025-10-06 12:21:43.606830211 +0000 UTC m=+2057.492855559" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.789702 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2phg"] Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.792303 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.818116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2phg"] Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.875562 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dh6l\" (UniqueName: \"kubernetes.io/projected/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-kube-api-access-7dh6l\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.875732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-catalog-content\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.875808 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-utilities\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.977441 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dh6l\" (UniqueName: \"kubernetes.io/projected/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-kube-api-access-7dh6l\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.977543 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-catalog-content\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.977575 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-utilities\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.978256 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-utilities\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:52 crc kubenswrapper[4958]: I1006 12:21:52.978990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-catalog-content\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:53 crc kubenswrapper[4958]: I1006 12:21:52.998401 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dh6l\" (UniqueName: \"kubernetes.io/projected/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-kube-api-access-7dh6l\") pod \"redhat-operators-q2phg\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:53 crc kubenswrapper[4958]: I1006 12:21:53.130601 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:21:53 crc kubenswrapper[4958]: I1006 12:21:53.615914 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2phg"] Oct 06 12:21:53 crc kubenswrapper[4958]: I1006 12:21:53.701875 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerStarted","Data":"3b364f6198d877556c5e08d3044cec933cf0c8911d314088f400fe8f25d07a94"} Oct 06 12:21:54 crc kubenswrapper[4958]: I1006 12:21:54.713022 4958 generic.go:334] "Generic (PLEG): container finished" podID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerID="8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776" exitCode=0 Oct 06 12:21:54 crc kubenswrapper[4958]: I1006 12:21:54.713094 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerDied","Data":"8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776"} Oct 06 12:21:56 crc kubenswrapper[4958]: I1006 12:21:56.749028 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerStarted","Data":"a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c"} Oct 06 12:21:56 crc kubenswrapper[4958]: E1006 12:21:56.970349 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6fc909_d9bc_4e01_8dbb_e74039c06de8.slice/crio-a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6fc909_d9bc_4e01_8dbb_e74039c06de8.slice/crio-conmon-a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:21:57 crc kubenswrapper[4958]: I1006 12:21:57.760006 4958 generic.go:334] "Generic (PLEG): container finished" podID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerID="a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c" exitCode=0 Oct 06 12:21:57 crc kubenswrapper[4958]: I1006 12:21:57.760163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerDied","Data":"a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c"} Oct 06 12:21:59 crc kubenswrapper[4958]: I1006 12:21:59.779109 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerStarted","Data":"66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5"} Oct 06 12:21:59 crc kubenswrapper[4958]: I1006 12:21:59.795217 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2phg" podStartSLOduration=3.7773082479999998 podStartE2EDuration="7.795198217s" podCreationTimestamp="2025-10-06 12:21:52 +0000 UTC" firstStartedPulling="2025-10-06 12:21:54.716119986 +0000 UTC m=+2068.602145334" lastFinishedPulling="2025-10-06 12:21:58.734009985 +0000 UTC m=+2072.620035303" observedRunningTime="2025-10-06 12:21:59.79412194 +0000 UTC m=+2073.680147258" watchObservedRunningTime="2025-10-06 12:21:59.795198217 +0000 UTC m=+2073.681223525" Oct 06 12:22:03 crc kubenswrapper[4958]: I1006 12:22:03.130772 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:22:03 crc 
kubenswrapper[4958]: I1006 12:22:03.131452 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:22:04 crc kubenswrapper[4958]: I1006 12:22:04.210573 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q2phg" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="registry-server" probeResult="failure" output=< Oct 06 12:22:04 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 12:22:04 crc kubenswrapper[4958]: > Oct 06 12:22:13 crc kubenswrapper[4958]: I1006 12:22:13.220711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:22:13 crc kubenswrapper[4958]: I1006 12:22:13.305126 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:22:13 crc kubenswrapper[4958]: I1006 12:22:13.464613 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2phg"] Oct 06 12:22:14 crc kubenswrapper[4958]: I1006 12:22:14.955330 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2phg" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="registry-server" containerID="cri-o://66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5" gracePeriod=2 Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.458026 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.538037 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-utilities\") pod \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.538735 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-catalog-content\") pod \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.538803 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dh6l\" (UniqueName: \"kubernetes.io/projected/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-kube-api-access-7dh6l\") pod \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\" (UID: \"7e6fc909-d9bc-4e01-8dbb-e74039c06de8\") " Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.539404 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-utilities" (OuterVolumeSpecName: "utilities") pod "7e6fc909-d9bc-4e01-8dbb-e74039c06de8" (UID: "7e6fc909-d9bc-4e01-8dbb-e74039c06de8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.543939 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-kube-api-access-7dh6l" (OuterVolumeSpecName: "kube-api-access-7dh6l") pod "7e6fc909-d9bc-4e01-8dbb-e74039c06de8" (UID: "7e6fc909-d9bc-4e01-8dbb-e74039c06de8"). InnerVolumeSpecName "kube-api-access-7dh6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.621840 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e6fc909-d9bc-4e01-8dbb-e74039c06de8" (UID: "7e6fc909-d9bc-4e01-8dbb-e74039c06de8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.641684 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.641718 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dh6l\" (UniqueName: \"kubernetes.io/projected/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-kube-api-access-7dh6l\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.641730 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6fc909-d9bc-4e01-8dbb-e74039c06de8-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.969405 4958 generic.go:334] "Generic (PLEG): container finished" podID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerID="66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5" exitCode=0 Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.969476 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerDied","Data":"66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5"} Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.969526 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-q2phg" event={"ID":"7e6fc909-d9bc-4e01-8dbb-e74039c06de8","Type":"ContainerDied","Data":"3b364f6198d877556c5e08d3044cec933cf0c8911d314088f400fe8f25d07a94"} Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.969539 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2phg" Oct 06 12:22:15 crc kubenswrapper[4958]: I1006 12:22:15.969556 4958 scope.go:117] "RemoveContainer" containerID="66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.008651 4958 scope.go:117] "RemoveContainer" containerID="a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.038347 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2phg"] Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.051181 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2phg"] Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.066836 4958 scope.go:117] "RemoveContainer" containerID="8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.115864 4958 scope.go:117] "RemoveContainer" containerID="66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5" Oct 06 12:22:16 crc kubenswrapper[4958]: E1006 12:22:16.116698 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5\": container with ID starting with 66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5 not found: ID does not exist" containerID="66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.116771 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5"} err="failed to get container status \"66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5\": rpc error: code = NotFound desc = could not find container \"66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5\": container with ID starting with 66bee53e0ddac49f6159e5965b3d3582e7acc65b5fd065f075eafcf68ffa5cc5 not found: ID does not exist" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.116801 4958 scope.go:117] "RemoveContainer" containerID="a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c" Oct 06 12:22:16 crc kubenswrapper[4958]: E1006 12:22:16.117159 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c\": container with ID starting with a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c not found: ID does not exist" containerID="a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.117192 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c"} err="failed to get container status \"a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c\": rpc error: code = NotFound desc = could not find container \"a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c\": container with ID starting with a081db3fb7a3b838b59596ad3350bd38a6e47f94e12dda725a3d1538689b4d9c not found: ID does not exist" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.117241 4958 scope.go:117] "RemoveContainer" containerID="8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776" Oct 06 12:22:16 crc kubenswrapper[4958]: E1006 
12:22:16.117653 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776\": container with ID starting with 8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776 not found: ID does not exist" containerID="8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.117681 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776"} err="failed to get container status \"8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776\": rpc error: code = NotFound desc = could not find container \"8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776\": container with ID starting with 8bf16e235b1277ae85db774bfa8e4706542d1de88936ca1ead2e61b42c9a4776 not found: ID does not exist" Oct 06 12:22:16 crc kubenswrapper[4958]: I1006 12:22:16.936214 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" path="/var/lib/kubelet/pods/7e6fc909-d9bc-4e01-8dbb-e74039c06de8/volumes" Oct 06 12:22:53 crc kubenswrapper[4958]: I1006 12:22:53.801444 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:22:53 crc kubenswrapper[4958]: I1006 12:22:53.803884 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.532493 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgtnr"] Oct 06 12:23:11 crc kubenswrapper[4958]: E1006 12:23:11.534041 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="extract-utilities" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.534070 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="extract-utilities" Oct 06 12:23:11 crc kubenswrapper[4958]: E1006 12:23:11.534108 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="extract-content" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.534126 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="extract-content" Oct 06 12:23:11 crc kubenswrapper[4958]: E1006 12:23:11.534202 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="registry-server" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.534222 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="registry-server" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.534768 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6fc909-d9bc-4e01-8dbb-e74039c06de8" containerName="registry-server" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.536875 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.552488 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgtnr"] Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.720037 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrth\" (UniqueName: \"kubernetes.io/projected/26491d73-6bef-411d-9f16-7e04566215b4-kube-api-access-mxrth\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.720094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-catalog-content\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.720284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-utilities\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.822123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrth\" (UniqueName: \"kubernetes.io/projected/26491d73-6bef-411d-9f16-7e04566215b4-kube-api-access-mxrth\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.822218 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-catalog-content\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.822405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-utilities\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.823033 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-utilities\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.824829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-catalog-content\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.848998 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrth\" (UniqueName: \"kubernetes.io/projected/26491d73-6bef-411d-9f16-7e04566215b4-kube-api-access-mxrth\") pod \"redhat-marketplace-fgtnr\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:11 crc kubenswrapper[4958]: I1006 12:23:11.904616 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:12 crc kubenswrapper[4958]: I1006 12:23:12.357486 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgtnr"] Oct 06 12:23:12 crc kubenswrapper[4958]: I1006 12:23:12.599606 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgtnr" event={"ID":"26491d73-6bef-411d-9f16-7e04566215b4","Type":"ContainerStarted","Data":"f39d7cca1a986861b992120741df3bf611a4f6d487c0d9ed1d3761ff0149c497"} Oct 06 12:23:13 crc kubenswrapper[4958]: I1006 12:23:13.614440 4958 generic.go:334] "Generic (PLEG): container finished" podID="26491d73-6bef-411d-9f16-7e04566215b4" containerID="755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473" exitCode=0 Oct 06 12:23:13 crc kubenswrapper[4958]: I1006 12:23:13.614519 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgtnr" event={"ID":"26491d73-6bef-411d-9f16-7e04566215b4","Type":"ContainerDied","Data":"755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473"} Oct 06 12:23:15 crc kubenswrapper[4958]: I1006 12:23:15.646118 4958 generic.go:334] "Generic (PLEG): container finished" podID="26491d73-6bef-411d-9f16-7e04566215b4" containerID="037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f" exitCode=0 Oct 06 12:23:15 crc kubenswrapper[4958]: I1006 12:23:15.646350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgtnr" event={"ID":"26491d73-6bef-411d-9f16-7e04566215b4","Type":"ContainerDied","Data":"037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f"} Oct 06 12:23:16 crc kubenswrapper[4958]: I1006 12:23:16.670692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgtnr" 
event={"ID":"26491d73-6bef-411d-9f16-7e04566215b4","Type":"ContainerStarted","Data":"f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1"} Oct 06 12:23:16 crc kubenswrapper[4958]: I1006 12:23:16.704311 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgtnr" podStartSLOduration=3.0422008 podStartE2EDuration="5.704291059s" podCreationTimestamp="2025-10-06 12:23:11 +0000 UTC" firstStartedPulling="2025-10-06 12:23:13.618663673 +0000 UTC m=+2147.504689031" lastFinishedPulling="2025-10-06 12:23:16.280753972 +0000 UTC m=+2150.166779290" observedRunningTime="2025-10-06 12:23:16.701755915 +0000 UTC m=+2150.587781253" watchObservedRunningTime="2025-10-06 12:23:16.704291059 +0000 UTC m=+2150.590316387" Oct 06 12:23:21 crc kubenswrapper[4958]: I1006 12:23:21.905076 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:21 crc kubenswrapper[4958]: I1006 12:23:21.905892 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:21 crc kubenswrapper[4958]: I1006 12:23:21.991963 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:22 crc kubenswrapper[4958]: I1006 12:23:22.833029 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:22 crc kubenswrapper[4958]: I1006 12:23:22.927328 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgtnr"] Oct 06 12:23:23 crc kubenswrapper[4958]: I1006 12:23:23.802519 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:23:23 crc kubenswrapper[4958]: I1006 12:23:23.803023 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:23:24 crc kubenswrapper[4958]: I1006 12:23:24.775391 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgtnr" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="registry-server" containerID="cri-o://f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1" gracePeriod=2 Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.246483 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.327391 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-utilities\") pod \"26491d73-6bef-411d-9f16-7e04566215b4\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.327492 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-catalog-content\") pod \"26491d73-6bef-411d-9f16-7e04566215b4\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.327560 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxrth\" (UniqueName: 
\"kubernetes.io/projected/26491d73-6bef-411d-9f16-7e04566215b4-kube-api-access-mxrth\") pod \"26491d73-6bef-411d-9f16-7e04566215b4\" (UID: \"26491d73-6bef-411d-9f16-7e04566215b4\") " Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.328285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-utilities" (OuterVolumeSpecName: "utilities") pod "26491d73-6bef-411d-9f16-7e04566215b4" (UID: "26491d73-6bef-411d-9f16-7e04566215b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.345501 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26491d73-6bef-411d-9f16-7e04566215b4-kube-api-access-mxrth" (OuterVolumeSpecName: "kube-api-access-mxrth") pod "26491d73-6bef-411d-9f16-7e04566215b4" (UID: "26491d73-6bef-411d-9f16-7e04566215b4"). InnerVolumeSpecName "kube-api-access-mxrth". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.354681 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26491d73-6bef-411d-9f16-7e04566215b4" (UID: "26491d73-6bef-411d-9f16-7e04566215b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.429375 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.429406 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26491d73-6bef-411d-9f16-7e04566215b4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.429417 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxrth\" (UniqueName: \"kubernetes.io/projected/26491d73-6bef-411d-9f16-7e04566215b4-kube-api-access-mxrth\") on node \"crc\" DevicePath \"\"" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.788818 4958 generic.go:334] "Generic (PLEG): container finished" podID="26491d73-6bef-411d-9f16-7e04566215b4" containerID="f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1" exitCode=0 Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.788893 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgtnr" event={"ID":"26491d73-6bef-411d-9f16-7e04566215b4","Type":"ContainerDied","Data":"f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1"} Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.788937 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgtnr" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.789280 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgtnr" event={"ID":"26491d73-6bef-411d-9f16-7e04566215b4","Type":"ContainerDied","Data":"f39d7cca1a986861b992120741df3bf611a4f6d487c0d9ed1d3761ff0149c497"} Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.789360 4958 scope.go:117] "RemoveContainer" containerID="f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.819351 4958 scope.go:117] "RemoveContainer" containerID="037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.840226 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgtnr"] Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.846180 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgtnr"] Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.856848 4958 scope.go:117] "RemoveContainer" containerID="755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.929403 4958 scope.go:117] "RemoveContainer" containerID="f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1" Oct 06 12:23:25 crc kubenswrapper[4958]: E1006 12:23:25.929884 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1\": container with ID starting with f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1 not found: ID does not exist" containerID="f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.929927 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1"} err="failed to get container status \"f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1\": rpc error: code = NotFound desc = could not find container \"f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1\": container with ID starting with f1ad190050273ae445286e5b5fc164a9dddf3ee5ee5c432d4b7c1d7b6b4fd1a1 not found: ID does not exist" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.929953 4958 scope.go:117] "RemoveContainer" containerID="037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f" Oct 06 12:23:25 crc kubenswrapper[4958]: E1006 12:23:25.930685 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f\": container with ID starting with 037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f not found: ID does not exist" containerID="037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.930750 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f"} err="failed to get container status \"037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f\": rpc error: code = NotFound desc = could not find container \"037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f\": container with ID starting with 037fe9c6e26e7b4b6a7d0ca558aa500e19a1b663e235ddace0b1bc8374257b9f not found: ID does not exist" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.930778 4958 scope.go:117] "RemoveContainer" containerID="755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473" Oct 06 12:23:25 crc kubenswrapper[4958]: E1006 
12:23:25.931182 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473\": container with ID starting with 755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473 not found: ID does not exist" containerID="755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473" Oct 06 12:23:25 crc kubenswrapper[4958]: I1006 12:23:25.931234 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473"} err="failed to get container status \"755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473\": rpc error: code = NotFound desc = could not find container \"755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473\": container with ID starting with 755d5a892d4822cf75455182c072fc36560b23d110798f07ece096fe772ef473 not found: ID does not exist" Oct 06 12:23:26 crc kubenswrapper[4958]: I1006 12:23:26.935879 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26491d73-6bef-411d-9f16-7e04566215b4" path="/var/lib/kubelet/pods/26491d73-6bef-411d-9f16-7e04566215b4/volumes" Oct 06 12:23:53 crc kubenswrapper[4958]: I1006 12:23:53.802368 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:23:53 crc kubenswrapper[4958]: I1006 12:23:53.803109 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 06 12:23:53 crc kubenswrapper[4958]: I1006 12:23:53.803230 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:23:53 crc kubenswrapper[4958]: I1006 12:23:53.804506 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18733eebda307ab68dd54cc8556b7585a95b1eb772aad1fe185443d73decf313"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:23:53 crc kubenswrapper[4958]: I1006 12:23:53.804607 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://18733eebda307ab68dd54cc8556b7585a95b1eb772aad1fe185443d73decf313" gracePeriod=600 Oct 06 12:23:54 crc kubenswrapper[4958]: I1006 12:23:54.128694 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="18733eebda307ab68dd54cc8556b7585a95b1eb772aad1fe185443d73decf313" exitCode=0 Oct 06 12:23:54 crc kubenswrapper[4958]: I1006 12:23:54.128776 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"18733eebda307ab68dd54cc8556b7585a95b1eb772aad1fe185443d73decf313"} Oct 06 12:23:54 crc kubenswrapper[4958]: I1006 12:23:54.129100 4958 scope.go:117] "RemoveContainer" containerID="92e208915ca70ac5b3441238341d5b28c612e6494612e3dece5b9499a5bc906f" Oct 06 12:23:55 crc kubenswrapper[4958]: I1006 12:23:55.143026 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5"} Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.748274 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-29n4f"] Oct 06 12:24:25 crc kubenswrapper[4958]: E1006 12:24:25.750998 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="extract-content" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.751028 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="extract-content" Oct 06 12:24:25 crc kubenswrapper[4958]: E1006 12:24:25.751072 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="registry-server" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.751084 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="registry-server" Oct 06 12:24:25 crc kubenswrapper[4958]: E1006 12:24:25.751126 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="extract-utilities" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.751139 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="extract-utilities" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.751594 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="26491d73-6bef-411d-9f16-7e04566215b4" containerName="registry-server" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.754469 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.774910 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29n4f"] Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.843597 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-utilities\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.843890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-catalog-content\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.844135 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k26h\" (UniqueName: \"kubernetes.io/projected/d8f86c95-bf04-4887-9e45-b4eb18dad492-kube-api-access-9k26h\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.945219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-catalog-content\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.945365 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9k26h\" (UniqueName: \"kubernetes.io/projected/d8f86c95-bf04-4887-9e45-b4eb18dad492-kube-api-access-9k26h\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.945506 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-utilities\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.945844 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-catalog-content\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.946649 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-utilities\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:25 crc kubenswrapper[4958]: I1006 12:24:25.984663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k26h\" (UniqueName: \"kubernetes.io/projected/d8f86c95-bf04-4887-9e45-b4eb18dad492-kube-api-access-9k26h\") pod \"certified-operators-29n4f\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:26 crc kubenswrapper[4958]: I1006 12:24:26.094210 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:26 crc kubenswrapper[4958]: I1006 12:24:26.638708 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29n4f"] Oct 06 12:24:27 crc kubenswrapper[4958]: I1006 12:24:27.492628 4958 generic.go:334] "Generic (PLEG): container finished" podID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerID="cbde799161b08ccc7a1c8940991034725c97c56331a126b194fbba2ef148c7f6" exitCode=0 Oct 06 12:24:27 crc kubenswrapper[4958]: I1006 12:24:27.493127 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerDied","Data":"cbde799161b08ccc7a1c8940991034725c97c56331a126b194fbba2ef148c7f6"} Oct 06 12:24:27 crc kubenswrapper[4958]: I1006 12:24:27.493206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerStarted","Data":"2593a5cb80af87951845e140094ef4f95238ee2d148d1e4b87e26171c52e260b"} Oct 06 12:24:27 crc kubenswrapper[4958]: I1006 12:24:27.497279 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:24:28 crc kubenswrapper[4958]: I1006 12:24:28.507412 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerStarted","Data":"c6271b78231e15c924aeb5b5818d304089e3f43eca41eed69b47bbde72d401ef"} Oct 06 12:24:30 crc kubenswrapper[4958]: I1006 12:24:30.539828 4958 generic.go:334] "Generic (PLEG): container finished" podID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerID="c6271b78231e15c924aeb5b5818d304089e3f43eca41eed69b47bbde72d401ef" exitCode=0 Oct 06 12:24:30 crc kubenswrapper[4958]: I1006 12:24:30.540027 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerDied","Data":"c6271b78231e15c924aeb5b5818d304089e3f43eca41eed69b47bbde72d401ef"} Oct 06 12:24:31 crc kubenswrapper[4958]: I1006 12:24:31.560242 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerStarted","Data":"95e7bad2348c7d6d0363789dc544eea4f2ea6486107b4dda99128083241f904e"} Oct 06 12:24:31 crc kubenswrapper[4958]: I1006 12:24:31.603461 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-29n4f" podStartSLOduration=3.061718302 podStartE2EDuration="6.603429845s" podCreationTimestamp="2025-10-06 12:24:25 +0000 UTC" firstStartedPulling="2025-10-06 12:24:27.496601052 +0000 UTC m=+2221.382626390" lastFinishedPulling="2025-10-06 12:24:31.038312615 +0000 UTC m=+2224.924337933" observedRunningTime="2025-10-06 12:24:31.59340042 +0000 UTC m=+2225.479425758" watchObservedRunningTime="2025-10-06 12:24:31.603429845 +0000 UTC m=+2225.489455183" Oct 06 12:24:36 crc kubenswrapper[4958]: I1006 12:24:36.095465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:36 crc kubenswrapper[4958]: I1006 12:24:36.095550 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:36 crc kubenswrapper[4958]: I1006 12:24:36.196319 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:36 crc kubenswrapper[4958]: I1006 12:24:36.670056 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:37 crc kubenswrapper[4958]: I1006 12:24:37.727519 
4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29n4f"] Oct 06 12:24:38 crc kubenswrapper[4958]: I1006 12:24:38.646900 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-29n4f" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="registry-server" containerID="cri-o://95e7bad2348c7d6d0363789dc544eea4f2ea6486107b4dda99128083241f904e" gracePeriod=2 Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.659613 4958 generic.go:334] "Generic (PLEG): container finished" podID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerID="95e7bad2348c7d6d0363789dc544eea4f2ea6486107b4dda99128083241f904e" exitCode=0 Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.659697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerDied","Data":"95e7bad2348c7d6d0363789dc544eea4f2ea6486107b4dda99128083241f904e"} Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.660059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29n4f" event={"ID":"d8f86c95-bf04-4887-9e45-b4eb18dad492","Type":"ContainerDied","Data":"2593a5cb80af87951845e140094ef4f95238ee2d148d1e4b87e26171c52e260b"} Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.660091 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2593a5cb80af87951845e140094ef4f95238ee2d148d1e4b87e26171c52e260b" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.671848 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.785470 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k26h\" (UniqueName: \"kubernetes.io/projected/d8f86c95-bf04-4887-9e45-b4eb18dad492-kube-api-access-9k26h\") pod \"d8f86c95-bf04-4887-9e45-b4eb18dad492\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.785640 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-utilities\") pod \"d8f86c95-bf04-4887-9e45-b4eb18dad492\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.785702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-catalog-content\") pod \"d8f86c95-bf04-4887-9e45-b4eb18dad492\" (UID: \"d8f86c95-bf04-4887-9e45-b4eb18dad492\") " Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.790308 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-utilities" (OuterVolumeSpecName: "utilities") pod "d8f86c95-bf04-4887-9e45-b4eb18dad492" (UID: "d8f86c95-bf04-4887-9e45-b4eb18dad492"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.850510 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f86c95-bf04-4887-9e45-b4eb18dad492-kube-api-access-9k26h" (OuterVolumeSpecName: "kube-api-access-9k26h") pod "d8f86c95-bf04-4887-9e45-b4eb18dad492" (UID: "d8f86c95-bf04-4887-9e45-b4eb18dad492"). InnerVolumeSpecName "kube-api-access-9k26h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.871250 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8f86c95-bf04-4887-9e45-b4eb18dad492" (UID: "d8f86c95-bf04-4887-9e45-b4eb18dad492"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.888705 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.888741 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f86c95-bf04-4887-9e45-b4eb18dad492-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:24:39 crc kubenswrapper[4958]: I1006 12:24:39.888757 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k26h\" (UniqueName: \"kubernetes.io/projected/d8f86c95-bf04-4887-9e45-b4eb18dad492-kube-api-access-9k26h\") on node \"crc\" DevicePath \"\"" Oct 06 12:24:40 crc kubenswrapper[4958]: I1006 12:24:40.671887 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29n4f" Oct 06 12:24:40 crc kubenswrapper[4958]: I1006 12:24:40.718478 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29n4f"] Oct 06 12:24:40 crc kubenswrapper[4958]: I1006 12:24:40.725849 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-29n4f"] Oct 06 12:24:40 crc kubenswrapper[4958]: I1006 12:24:40.928508 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" path="/var/lib/kubelet/pods/d8f86c95-bf04-4887-9e45-b4eb18dad492/volumes" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.263518 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7kj7k"] Oct 06 12:24:51 crc kubenswrapper[4958]: E1006 12:24:51.264501 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="registry-server" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.264512 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="registry-server" Oct 06 12:24:51 crc kubenswrapper[4958]: E1006 12:24:51.264546 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="extract-utilities" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.264555 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="extract-utilities" Oct 06 12:24:51 crc kubenswrapper[4958]: E1006 12:24:51.264566 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="extract-content" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.264578 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="extract-content" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.264798 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f86c95-bf04-4887-9e45-b4eb18dad492" containerName="registry-server" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.266475 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.276097 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kj7k"] Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.354220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-catalog-content\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.354649 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldb8z\" (UniqueName: \"kubernetes.io/projected/5205bf50-d310-4880-8ee8-f2d4d02756b4-kube-api-access-ldb8z\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.354728 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-utilities\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.456220 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ldb8z\" (UniqueName: \"kubernetes.io/projected/5205bf50-d310-4880-8ee8-f2d4d02756b4-kube-api-access-ldb8z\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.456370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-utilities\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.456471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-catalog-content\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.456999 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-catalog-content\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.457016 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-utilities\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.496650 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ldb8z\" (UniqueName: \"kubernetes.io/projected/5205bf50-d310-4880-8ee8-f2d4d02756b4-kube-api-access-ldb8z\") pod \"community-operators-7kj7k\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:51 crc kubenswrapper[4958]: I1006 12:24:51.600543 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:24:52 crc kubenswrapper[4958]: I1006 12:24:52.240992 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kj7k"] Oct 06 12:24:52 crc kubenswrapper[4958]: W1006 12:24:52.249027 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5205bf50_d310_4880_8ee8_f2d4d02756b4.slice/crio-0bfef95b7b17d762219c9cdd2ff94596cd60ed6f9c0a8debb169c7820023cb06 WatchSource:0}: Error finding container 0bfef95b7b17d762219c9cdd2ff94596cd60ed6f9c0a8debb169c7820023cb06: Status 404 returned error can't find the container with id 0bfef95b7b17d762219c9cdd2ff94596cd60ed6f9c0a8debb169c7820023cb06 Oct 06 12:24:52 crc kubenswrapper[4958]: I1006 12:24:52.795640 4958 generic.go:334] "Generic (PLEG): container finished" podID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerID="a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24" exitCode=0 Oct 06 12:24:52 crc kubenswrapper[4958]: I1006 12:24:52.795741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kj7k" event={"ID":"5205bf50-d310-4880-8ee8-f2d4d02756b4","Type":"ContainerDied","Data":"a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24"} Oct 06 12:24:52 crc kubenswrapper[4958]: I1006 12:24:52.796036 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kj7k" 
event={"ID":"5205bf50-d310-4880-8ee8-f2d4d02756b4","Type":"ContainerStarted","Data":"0bfef95b7b17d762219c9cdd2ff94596cd60ed6f9c0a8debb169c7820023cb06"} Oct 06 12:24:54 crc kubenswrapper[4958]: I1006 12:24:54.830728 4958 generic.go:334] "Generic (PLEG): container finished" podID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerID="3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d" exitCode=0 Oct 06 12:24:54 crc kubenswrapper[4958]: I1006 12:24:54.830846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kj7k" event={"ID":"5205bf50-d310-4880-8ee8-f2d4d02756b4","Type":"ContainerDied","Data":"3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d"} Oct 06 12:24:55 crc kubenswrapper[4958]: I1006 12:24:55.844925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kj7k" event={"ID":"5205bf50-d310-4880-8ee8-f2d4d02756b4","Type":"ContainerStarted","Data":"536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd"} Oct 06 12:24:55 crc kubenswrapper[4958]: I1006 12:24:55.868592 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7kj7k" podStartSLOduration=2.451025429 podStartE2EDuration="4.868565185s" podCreationTimestamp="2025-10-06 12:24:51 +0000 UTC" firstStartedPulling="2025-10-06 12:24:52.79989535 +0000 UTC m=+2246.685920688" lastFinishedPulling="2025-10-06 12:24:55.217435096 +0000 UTC m=+2249.103460444" observedRunningTime="2025-10-06 12:24:55.866720898 +0000 UTC m=+2249.752746226" watchObservedRunningTime="2025-10-06 12:24:55.868565185 +0000 UTC m=+2249.754590513" Oct 06 12:25:01 crc kubenswrapper[4958]: I1006 12:25:01.601244 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:25:01 crc kubenswrapper[4958]: I1006 12:25:01.601961 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:25:01 crc kubenswrapper[4958]: I1006 12:25:01.707050 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:25:01 crc kubenswrapper[4958]: I1006 12:25:01.982203 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:25:02 crc kubenswrapper[4958]: I1006 12:25:02.048122 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kj7k"] Oct 06 12:25:03 crc kubenswrapper[4958]: I1006 12:25:03.923283 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7kj7k" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="registry-server" containerID="cri-o://536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd" gracePeriod=2 Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.384813 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.434641 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-catalog-content\") pod \"5205bf50-d310-4880-8ee8-f2d4d02756b4\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.434685 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-utilities\") pod \"5205bf50-d310-4880-8ee8-f2d4d02756b4\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.434768 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldb8z\" (UniqueName: \"kubernetes.io/projected/5205bf50-d310-4880-8ee8-f2d4d02756b4-kube-api-access-ldb8z\") pod \"5205bf50-d310-4880-8ee8-f2d4d02756b4\" (UID: \"5205bf50-d310-4880-8ee8-f2d4d02756b4\") " Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.435620 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-utilities" (OuterVolumeSpecName: "utilities") pod "5205bf50-d310-4880-8ee8-f2d4d02756b4" (UID: "5205bf50-d310-4880-8ee8-f2d4d02756b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.442477 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5205bf50-d310-4880-8ee8-f2d4d02756b4-kube-api-access-ldb8z" (OuterVolumeSpecName: "kube-api-access-ldb8z") pod "5205bf50-d310-4880-8ee8-f2d4d02756b4" (UID: "5205bf50-d310-4880-8ee8-f2d4d02756b4"). InnerVolumeSpecName "kube-api-access-ldb8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.489374 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5205bf50-d310-4880-8ee8-f2d4d02756b4" (UID: "5205bf50-d310-4880-8ee8-f2d4d02756b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.536421 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.536452 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205bf50-d310-4880-8ee8-f2d4d02756b4-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.536462 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldb8z\" (UniqueName: \"kubernetes.io/projected/5205bf50-d310-4880-8ee8-f2d4d02756b4-kube-api-access-ldb8z\") on node \"crc\" DevicePath \"\"" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.939055 4958 generic.go:334] "Generic (PLEG): container finished" podID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerID="536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd" exitCode=0 Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.939140 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kj7k" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.939191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kj7k" event={"ID":"5205bf50-d310-4880-8ee8-f2d4d02756b4","Type":"ContainerDied","Data":"536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd"} Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.941181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kj7k" event={"ID":"5205bf50-d310-4880-8ee8-f2d4d02756b4","Type":"ContainerDied","Data":"0bfef95b7b17d762219c9cdd2ff94596cd60ed6f9c0a8debb169c7820023cb06"} Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.941242 4958 scope.go:117] "RemoveContainer" containerID="536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd" Oct 06 12:25:04 crc kubenswrapper[4958]: I1006 12:25:04.991101 4958 scope.go:117] "RemoveContainer" containerID="3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.022123 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kj7k"] Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.032102 4958 scope.go:117] "RemoveContainer" containerID="a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.044416 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7kj7k"] Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.070448 4958 scope.go:117] "RemoveContainer" containerID="536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd" Oct 06 12:25:05 crc kubenswrapper[4958]: E1006 12:25:05.071474 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd\": container with ID starting with 536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd not found: ID does not exist" containerID="536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.071517 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd"} err="failed to get container status \"536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd\": rpc error: code = NotFound desc = could not find container \"536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd\": container with ID starting with 536d4076bf17002e5d371c40df80002d3e8c305db4404ec544d9fa2b7435dfbd not found: ID does not exist" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.071544 4958 scope.go:117] "RemoveContainer" containerID="3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d" Oct 06 12:25:05 crc kubenswrapper[4958]: E1006 12:25:05.071888 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d\": container with ID starting with 3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d not found: ID does not exist" containerID="3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.071918 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d"} err="failed to get container status \"3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d\": rpc error: code = NotFound desc = could not find container \"3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d\": container with ID 
starting with 3e9e37fa8242f2e7c82ff56b6f4ebed2c2a2b36ae798ee960677f0e571b2cf4d not found: ID does not exist" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.071937 4958 scope.go:117] "RemoveContainer" containerID="a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24" Oct 06 12:25:05 crc kubenswrapper[4958]: E1006 12:25:05.072487 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24\": container with ID starting with a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24 not found: ID does not exist" containerID="a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24" Oct 06 12:25:05 crc kubenswrapper[4958]: I1006 12:25:05.072562 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24"} err="failed to get container status \"a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24\": rpc error: code = NotFound desc = could not find container \"a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24\": container with ID starting with a7f03f68d2d2c4e06e521efab3cc28e5b945dad3f2e9fa904c4fec709b4ebc24 not found: ID does not exist" Oct 06 12:25:06 crc kubenswrapper[4958]: I1006 12:25:06.932133 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" path="/var/lib/kubelet/pods/5205bf50-d310-4880-8ee8-f2d4d02756b4/volumes" Oct 06 12:26:23 crc kubenswrapper[4958]: I1006 12:26:23.801848 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:26:23 crc kubenswrapper[4958]: I1006 
12:26:23.802673 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:26:34 crc kubenswrapper[4958]: E1006 12:26:34.448370 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod186200b0_8ce3_46a8_9691_42b254a077be.slice/crio-abb024f16f020633345c0d72db94c945d2a797e5a6e2a8f82fffb4f0fb6adf36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod186200b0_8ce3_46a8_9691_42b254a077be.slice/crio-conmon-abb024f16f020633345c0d72db94c945d2a797e5a6e2a8f82fffb4f0fb6adf36.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:26:34 crc kubenswrapper[4958]: I1006 12:26:34.947891 4958 generic.go:334] "Generic (PLEG): container finished" podID="186200b0-8ce3-46a8-9691-42b254a077be" containerID="abb024f16f020633345c0d72db94c945d2a797e5a6e2a8f82fffb4f0fb6adf36" exitCode=0 Oct 06 12:26:34 crc kubenswrapper[4958]: I1006 12:26:34.947976 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" event={"ID":"186200b0-8ce3-46a8-9691-42b254a077be","Type":"ContainerDied","Data":"abb024f16f020633345c0d72db94c945d2a797e5a6e2a8f82fffb4f0fb6adf36"} Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.430556 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.547284 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktjg\" (UniqueName: \"kubernetes.io/projected/186200b0-8ce3-46a8-9691-42b254a077be-kube-api-access-tktjg\") pod \"186200b0-8ce3-46a8-9691-42b254a077be\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.547530 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-secret-0\") pod \"186200b0-8ce3-46a8-9691-42b254a077be\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.547559 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-inventory\") pod \"186200b0-8ce3-46a8-9691-42b254a077be\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.547649 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-ssh-key\") pod \"186200b0-8ce3-46a8-9691-42b254a077be\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.547685 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-combined-ca-bundle\") pod \"186200b0-8ce3-46a8-9691-42b254a077be\" (UID: \"186200b0-8ce3-46a8-9691-42b254a077be\") " Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.553327 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "186200b0-8ce3-46a8-9691-42b254a077be" (UID: "186200b0-8ce3-46a8-9691-42b254a077be"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.553559 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186200b0-8ce3-46a8-9691-42b254a077be-kube-api-access-tktjg" (OuterVolumeSpecName: "kube-api-access-tktjg") pod "186200b0-8ce3-46a8-9691-42b254a077be" (UID: "186200b0-8ce3-46a8-9691-42b254a077be"). InnerVolumeSpecName "kube-api-access-tktjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.575474 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "186200b0-8ce3-46a8-9691-42b254a077be" (UID: "186200b0-8ce3-46a8-9691-42b254a077be"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.577607 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "186200b0-8ce3-46a8-9691-42b254a077be" (UID: "186200b0-8ce3-46a8-9691-42b254a077be"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.583764 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-inventory" (OuterVolumeSpecName: "inventory") pod "186200b0-8ce3-46a8-9691-42b254a077be" (UID: "186200b0-8ce3-46a8-9691-42b254a077be"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.650033 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.650092 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.650110 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.650126 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186200b0-8ce3-46a8-9691-42b254a077be-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.650174 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktjg\" (UniqueName: \"kubernetes.io/projected/186200b0-8ce3-46a8-9691-42b254a077be-kube-api-access-tktjg\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.973036 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" event={"ID":"186200b0-8ce3-46a8-9691-42b254a077be","Type":"ContainerDied","Data":"e9aa912b2bf8898207e75ad69f5af243ad2b8045c9ac0546db0b853e195ff768"} Oct 06 12:26:36 crc kubenswrapper[4958]: I1006 12:26:36.973112 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9aa912b2bf8898207e75ad69f5af243ad2b8045c9ac0546db0b853e195ff768" Oct 06 12:26:36 
crc kubenswrapper[4958]: I1006 12:26:36.973138 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100002 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6"] Oct 06 12:26:37 crc kubenswrapper[4958]: E1006 12:26:37.100518 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="registry-server" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100539 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="registry-server" Oct 06 12:26:37 crc kubenswrapper[4958]: E1006 12:26:37.100571 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="extract-utilities" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100580 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="extract-utilities" Oct 06 12:26:37 crc kubenswrapper[4958]: E1006 12:26:37.100601 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="extract-content" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100609 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="extract-content" Oct 06 12:26:37 crc kubenswrapper[4958]: E1006 12:26:37.100628 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="186200b0-8ce3-46a8-9691-42b254a077be" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100637 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="186200b0-8ce3-46a8-9691-42b254a077be" 
containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100879 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5205bf50-d310-4880-8ee8-f2d4d02756b4" containerName="registry-server" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.100911 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="186200b0-8ce3-46a8-9691-42b254a077be" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.102334 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.109910 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.109968 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.110014 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.110630 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.110816 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.110837 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.111009 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.132377 4958 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6"] Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.263544 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.263659 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.263719 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.263762 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc 
kubenswrapper[4958]: I1006 12:26:37.263974 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.264097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.264264 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.264310 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.264548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcv8\" (UniqueName: 
\"kubernetes.io/projected/226865fc-14de-4b5f-a693-a27ef3d06efa-kube-api-access-ldcv8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.365923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.365985 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366021 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366086 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 
12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366388 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366507 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366780 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldcv8\" (UniqueName: \"kubernetes.io/projected/226865fc-14de-4b5f-a693-a27ef3d06efa-kube-api-access-ldcv8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.366887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.367955 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.372984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.373586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.374271 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 
06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.374713 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.376650 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.376802 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.385960 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.394528 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldcv8\" (UniqueName: \"kubernetes.io/projected/226865fc-14de-4b5f-a693-a27ef3d06efa-kube-api-access-ldcv8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nx8z6\" (UID: 
\"226865fc-14de-4b5f-a693-a27ef3d06efa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:37 crc kubenswrapper[4958]: I1006 12:26:37.439390 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:26:38 crc kubenswrapper[4958]: I1006 12:26:38.073328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6"] Oct 06 12:26:38 crc kubenswrapper[4958]: I1006 12:26:38.993745 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" event={"ID":"226865fc-14de-4b5f-a693-a27ef3d06efa","Type":"ContainerStarted","Data":"7912cc8b0bee07dfca8e41479b12bf31cc5ee093f86a8a5868504647f912d89f"} Oct 06 12:26:38 crc kubenswrapper[4958]: I1006 12:26:38.994307 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" event={"ID":"226865fc-14de-4b5f-a693-a27ef3d06efa","Type":"ContainerStarted","Data":"e61fe60eabcef2683e94c7e2c8724a3e271e973a58986750833f8517a3dd7ebd"} Oct 06 12:26:53 crc kubenswrapper[4958]: I1006 12:26:53.801893 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:26:53 crc kubenswrapper[4958]: I1006 12:26:53.802453 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:27:06 crc kubenswrapper[4958]: I1006 12:27:06.381543 4958 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/swift-proxy-5cc4dd9879-7xgdr" podUID="eb6c6362-e91c-47c8-8616-702c4cada19a" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 06 12:27:23 crc kubenswrapper[4958]: I1006 12:27:23.801355 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:27:23 crc kubenswrapper[4958]: I1006 12:27:23.802642 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:27:23 crc kubenswrapper[4958]: I1006 12:27:23.802711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:27:23 crc kubenswrapper[4958]: I1006 12:27:23.803554 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:27:23 crc kubenswrapper[4958]: I1006 12:27:23.803614 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" 
gracePeriod=600 Oct 06 12:27:23 crc kubenswrapper[4958]: E1006 12:27:23.979488 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:27:24 crc kubenswrapper[4958]: I1006 12:27:24.493372 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" exitCode=0 Oct 06 12:27:24 crc kubenswrapper[4958]: I1006 12:27:24.493449 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5"} Oct 06 12:27:24 crc kubenswrapper[4958]: I1006 12:27:24.493853 4958 scope.go:117] "RemoveContainer" containerID="18733eebda307ab68dd54cc8556b7585a95b1eb772aad1fe185443d73decf313" Oct 06 12:27:24 crc kubenswrapper[4958]: I1006 12:27:24.494635 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:27:24 crc kubenswrapper[4958]: E1006 12:27:24.495041 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:27:24 crc kubenswrapper[4958]: I1006 
12:27:24.546292 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" podStartSLOduration=47.110409794 podStartE2EDuration="47.546268214s" podCreationTimestamp="2025-10-06 12:26:37 +0000 UTC" firstStartedPulling="2025-10-06 12:26:38.085808599 +0000 UTC m=+2351.971833917" lastFinishedPulling="2025-10-06 12:26:38.521667029 +0000 UTC m=+2352.407692337" observedRunningTime="2025-10-06 12:26:39.016652005 +0000 UTC m=+2352.902677353" watchObservedRunningTime="2025-10-06 12:27:24.546268214 +0000 UTC m=+2398.432293522" Oct 06 12:27:37 crc kubenswrapper[4958]: I1006 12:27:37.913980 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:27:37 crc kubenswrapper[4958]: E1006 12:27:37.915369 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:27:48 crc kubenswrapper[4958]: I1006 12:27:48.914102 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:27:48 crc kubenswrapper[4958]: E1006 12:27:48.915325 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:28:01 crc kubenswrapper[4958]: I1006 
12:28:01.913668 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:28:01 crc kubenswrapper[4958]: E1006 12:28:01.914549 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:28:15 crc kubenswrapper[4958]: I1006 12:28:15.914005 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:28:15 crc kubenswrapper[4958]: E1006 12:28:15.915235 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:28:29 crc kubenswrapper[4958]: I1006 12:28:29.913139 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:28:29 crc kubenswrapper[4958]: E1006 12:28:29.913849 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:28:42 crc 
kubenswrapper[4958]: I1006 12:28:42.914696 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:28:42 crc kubenswrapper[4958]: E1006 12:28:42.916050 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:28:57 crc kubenswrapper[4958]: I1006 12:28:57.914122 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:28:57 crc kubenswrapper[4958]: E1006 12:28:57.915465 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:29:10 crc kubenswrapper[4958]: I1006 12:29:10.913246 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:29:10 crc kubenswrapper[4958]: E1006 12:29:10.914199 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 
06 12:29:23 crc kubenswrapper[4958]: I1006 12:29:23.914544 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:29:23 crc kubenswrapper[4958]: E1006 12:29:23.915893 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:29:34 crc kubenswrapper[4958]: I1006 12:29:34.914878 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:29:34 crc kubenswrapper[4958]: E1006 12:29:34.916072 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:29:45 crc kubenswrapper[4958]: I1006 12:29:45.913845 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:29:45 crc kubenswrapper[4958]: E1006 12:29:45.914675 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:29:59 crc kubenswrapper[4958]: I1006 12:29:59.914345 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:29:59 crc kubenswrapper[4958]: E1006 12:29:59.915198 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.169063 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf"] Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.170920 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.173017 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.174179 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.183269 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf"] Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.292302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51131f3b-9af1-45ce-8e80-cd7439743329-secret-volume\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.292850 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dzh\" (UniqueName: \"kubernetes.io/projected/51131f3b-9af1-45ce-8e80-cd7439743329-kube-api-access-p6dzh\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.292985 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51131f3b-9af1-45ce-8e80-cd7439743329-config-volume\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.395253 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51131f3b-9af1-45ce-8e80-cd7439743329-secret-volume\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.395471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dzh\" (UniqueName: \"kubernetes.io/projected/51131f3b-9af1-45ce-8e80-cd7439743329-kube-api-access-p6dzh\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.395546 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51131f3b-9af1-45ce-8e80-cd7439743329-config-volume\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.396782 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51131f3b-9af1-45ce-8e80-cd7439743329-config-volume\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.400788 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/51131f3b-9af1-45ce-8e80-cd7439743329-secret-volume\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.415288 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dzh\" (UniqueName: \"kubernetes.io/projected/51131f3b-9af1-45ce-8e80-cd7439743329-kube-api-access-p6dzh\") pod \"collect-profiles-29329230-wdrsf\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:00 crc kubenswrapper[4958]: I1006 12:30:00.496662 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:01 crc kubenswrapper[4958]: I1006 12:30:01.083368 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf"] Oct 06 12:30:01 crc kubenswrapper[4958]: I1006 12:30:01.266773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" event={"ID":"51131f3b-9af1-45ce-8e80-cd7439743329","Type":"ContainerStarted","Data":"b6974d44ddc2acb6f3131290df8ac4efe132d5257a39ec65fc0e58853e4ef23e"} Oct 06 12:30:01 crc kubenswrapper[4958]: I1006 12:30:01.266831 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" event={"ID":"51131f3b-9af1-45ce-8e80-cd7439743329","Type":"ContainerStarted","Data":"af518b5cd00c44a40a3f198e4c2ed993c74ce8d94fb9a55bfd23a29be0663302"} Oct 06 12:30:01 crc kubenswrapper[4958]: I1006 12:30:01.290520 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" 
podStartSLOduration=1.290495176 podStartE2EDuration="1.290495176s" podCreationTimestamp="2025-10-06 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:30:01.280777172 +0000 UTC m=+2555.166802500" watchObservedRunningTime="2025-10-06 12:30:01.290495176 +0000 UTC m=+2555.176520494" Oct 06 12:30:02 crc kubenswrapper[4958]: I1006 12:30:02.275744 4958 generic.go:334] "Generic (PLEG): container finished" podID="51131f3b-9af1-45ce-8e80-cd7439743329" containerID="b6974d44ddc2acb6f3131290df8ac4efe132d5257a39ec65fc0e58853e4ef23e" exitCode=0 Oct 06 12:30:02 crc kubenswrapper[4958]: I1006 12:30:02.275835 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" event={"ID":"51131f3b-9af1-45ce-8e80-cd7439743329","Type":"ContainerDied","Data":"b6974d44ddc2acb6f3131290df8ac4efe132d5257a39ec65fc0e58853e4ef23e"} Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.661926 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.760480 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51131f3b-9af1-45ce-8e80-cd7439743329-secret-volume\") pod \"51131f3b-9af1-45ce-8e80-cd7439743329\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.760559 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6dzh\" (UniqueName: \"kubernetes.io/projected/51131f3b-9af1-45ce-8e80-cd7439743329-kube-api-access-p6dzh\") pod \"51131f3b-9af1-45ce-8e80-cd7439743329\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.760639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51131f3b-9af1-45ce-8e80-cd7439743329-config-volume\") pod \"51131f3b-9af1-45ce-8e80-cd7439743329\" (UID: \"51131f3b-9af1-45ce-8e80-cd7439743329\") " Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.761203 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51131f3b-9af1-45ce-8e80-cd7439743329-config-volume" (OuterVolumeSpecName: "config-volume") pod "51131f3b-9af1-45ce-8e80-cd7439743329" (UID: "51131f3b-9af1-45ce-8e80-cd7439743329"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.775727 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51131f3b-9af1-45ce-8e80-cd7439743329-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51131f3b-9af1-45ce-8e80-cd7439743329" (UID: "51131f3b-9af1-45ce-8e80-cd7439743329"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.777336 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51131f3b-9af1-45ce-8e80-cd7439743329-kube-api-access-p6dzh" (OuterVolumeSpecName: "kube-api-access-p6dzh") pod "51131f3b-9af1-45ce-8e80-cd7439743329" (UID: "51131f3b-9af1-45ce-8e80-cd7439743329"). InnerVolumeSpecName "kube-api-access-p6dzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.862178 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51131f3b-9af1-45ce-8e80-cd7439743329-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.862226 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51131f3b-9af1-45ce-8e80-cd7439743329-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:03 crc kubenswrapper[4958]: I1006 12:30:03.862242 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6dzh\" (UniqueName: \"kubernetes.io/projected/51131f3b-9af1-45ce-8e80-cd7439743329-kube-api-access-p6dzh\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:04 crc kubenswrapper[4958]: I1006 12:30:04.298800 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" event={"ID":"51131f3b-9af1-45ce-8e80-cd7439743329","Type":"ContainerDied","Data":"af518b5cd00c44a40a3f198e4c2ed993c74ce8d94fb9a55bfd23a29be0663302"} Oct 06 12:30:04 crc kubenswrapper[4958]: I1006 12:30:04.299349 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af518b5cd00c44a40a3f198e4c2ed993c74ce8d94fb9a55bfd23a29be0663302" Oct 06 12:30:04 crc kubenswrapper[4958]: I1006 12:30:04.298852 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf" Oct 06 12:30:04 crc kubenswrapper[4958]: I1006 12:30:04.420374 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"] Oct 06 12:30:04 crc kubenswrapper[4958]: I1006 12:30:04.434060 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-6dnq5"] Oct 06 12:30:04 crc kubenswrapper[4958]: I1006 12:30:04.931350 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221a0896-9d41-4bf4-b05e-57c067e8b885" path="/var/lib/kubelet/pods/221a0896-9d41-4bf4-b05e-57c067e8b885/volumes" Oct 06 12:30:10 crc kubenswrapper[4958]: I1006 12:30:10.913505 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:30:10 crc kubenswrapper[4958]: E1006 12:30:10.914382 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:30:22 crc kubenswrapper[4958]: I1006 12:30:22.913777 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:30:22 crc kubenswrapper[4958]: E1006 12:30:22.914673 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:30:31 crc kubenswrapper[4958]: I1006 12:30:31.499591 4958 scope.go:117] "RemoveContainer" containerID="13b32212c738a7428967dc14b36ee17dbb64e527c6d599368c48b4ac82b29aaa" Oct 06 12:30:31 crc kubenswrapper[4958]: I1006 12:30:31.543777 4958 scope.go:117] "RemoveContainer" containerID="95e7bad2348c7d6d0363789dc544eea4f2ea6486107b4dda99128083241f904e" Oct 06 12:30:31 crc kubenswrapper[4958]: I1006 12:30:31.572375 4958 scope.go:117] "RemoveContainer" containerID="c6271b78231e15c924aeb5b5818d304089e3f43eca41eed69b47bbde72d401ef" Oct 06 12:30:31 crc kubenswrapper[4958]: I1006 12:30:31.602639 4958 scope.go:117] "RemoveContainer" containerID="cbde799161b08ccc7a1c8940991034725c97c56331a126b194fbba2ef148c7f6" Oct 06 12:30:34 crc kubenswrapper[4958]: I1006 12:30:34.575319 4958 generic.go:334] "Generic (PLEG): container finished" podID="226865fc-14de-4b5f-a693-a27ef3d06efa" containerID="7912cc8b0bee07dfca8e41479b12bf31cc5ee093f86a8a5868504647f912d89f" exitCode=0 Oct 06 12:30:34 crc kubenswrapper[4958]: I1006 12:30:34.575437 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" event={"ID":"226865fc-14de-4b5f-a693-a27ef3d06efa","Type":"ContainerDied","Data":"7912cc8b0bee07dfca8e41479b12bf31cc5ee093f86a8a5868504647f912d89f"} Oct 06 12:30:34 crc kubenswrapper[4958]: I1006 12:30:34.914354 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:30:34 crc kubenswrapper[4958]: E1006 12:30:34.914667 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.124531 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206231 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-combined-ca-bundle\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcv8\" (UniqueName: \"kubernetes.io/projected/226865fc-14de-4b5f-a693-a27ef3d06efa-kube-api-access-ldcv8\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206663 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-ssh-key\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206709 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-inventory\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206789 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-1\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206866 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-1\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206957 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-extra-config-0\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.206979 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-0\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.207030 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-0\") pod \"226865fc-14de-4b5f-a693-a27ef3d06efa\" (UID: \"226865fc-14de-4b5f-a693-a27ef3d06efa\") " Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.216401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226865fc-14de-4b5f-a693-a27ef3d06efa-kube-api-access-ldcv8" (OuterVolumeSpecName: "kube-api-access-ldcv8") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: 
"226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "kube-api-access-ldcv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.231295 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.246037 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.250304 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.256905 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.257816 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-inventory" (OuterVolumeSpecName: "inventory") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.275459 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.276003 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.276496 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "226865fc-14de-4b5f-a693-a27ef3d06efa" (UID: "226865fc-14de-4b5f-a693-a27ef3d06efa"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313106 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313169 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313184 4958 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313195 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldcv8\" (UniqueName: \"kubernetes.io/projected/226865fc-14de-4b5f-a693-a27ef3d06efa-kube-api-access-ldcv8\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313207 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313219 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313230 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 06 
12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313242 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.313266 4958 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/226865fc-14de-4b5f-a693-a27ef3d06efa-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.595965 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" event={"ID":"226865fc-14de-4b5f-a693-a27ef3d06efa","Type":"ContainerDied","Data":"e61fe60eabcef2683e94c7e2c8724a3e271e973a58986750833f8517a3dd7ebd"} Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.596021 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61fe60eabcef2683e94c7e2c8724a3e271e973a58986750833f8517a3dd7ebd" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.596070 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nx8z6" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.709076 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4"] Oct 06 12:30:36 crc kubenswrapper[4958]: E1006 12:30:36.709581 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226865fc-14de-4b5f-a693-a27ef3d06efa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.709605 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="226865fc-14de-4b5f-a693-a27ef3d06efa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 06 12:30:36 crc kubenswrapper[4958]: E1006 12:30:36.709620 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51131f3b-9af1-45ce-8e80-cd7439743329" containerName="collect-profiles" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.709629 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="51131f3b-9af1-45ce-8e80-cd7439743329" containerName="collect-profiles" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.709849 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="226865fc-14de-4b5f-a693-a27ef3d06efa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.709875 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="51131f3b-9af1-45ce-8e80-cd7439743329" containerName="collect-profiles" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.710741 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.713347 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.713421 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.713631 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frkbw" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.714500 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.714982 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.722709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4"] Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.825365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.825495 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.825523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7n6c\" (UniqueName: \"kubernetes.io/projected/5569cfb8-1cd6-4f3d-9eee-282ddce72171-kube-api-access-q7n6c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.825696 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.825757 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.825828 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.826004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.928634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.928839 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.928898 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7n6c\" (UniqueName: \"kubernetes.io/projected/5569cfb8-1cd6-4f3d-9eee-282ddce72171-kube-api-access-q7n6c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.928982 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.929022 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.929087 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.929137 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.932971 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.933349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.933884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.933910 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.934650 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc 
kubenswrapper[4958]: I1006 12:30:36.934674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:36 crc kubenswrapper[4958]: I1006 12:30:36.951020 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7n6c\" (UniqueName: \"kubernetes.io/projected/5569cfb8-1cd6-4f3d-9eee-282ddce72171-kube-api-access-q7n6c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-47hl4\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:37 crc kubenswrapper[4958]: I1006 12:30:37.028377 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:30:37 crc kubenswrapper[4958]: I1006 12:30:37.658193 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4"] Oct 06 12:30:37 crc kubenswrapper[4958]: I1006 12:30:37.668426 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:30:38 crc kubenswrapper[4958]: I1006 12:30:38.615644 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" event={"ID":"5569cfb8-1cd6-4f3d-9eee-282ddce72171","Type":"ContainerStarted","Data":"b16aa37df53a02d232f74951d3a59e55e0b5505bca8b7f0c533222f2964f5a96"} Oct 06 12:30:38 crc kubenswrapper[4958]: I1006 12:30:38.616336 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" 
event={"ID":"5569cfb8-1cd6-4f3d-9eee-282ddce72171","Type":"ContainerStarted","Data":"0c6b212335927d4877b443f769fea0bb591079dce646b10884d4610fa20e216d"} Oct 06 12:30:38 crc kubenswrapper[4958]: I1006 12:30:38.641185 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" podStartSLOduration=1.994331173 podStartE2EDuration="2.641166909s" podCreationTimestamp="2025-10-06 12:30:36 +0000 UTC" firstStartedPulling="2025-10-06 12:30:37.668176034 +0000 UTC m=+2591.554201342" lastFinishedPulling="2025-10-06 12:30:38.31501174 +0000 UTC m=+2592.201037078" observedRunningTime="2025-10-06 12:30:38.632598298 +0000 UTC m=+2592.518623606" watchObservedRunningTime="2025-10-06 12:30:38.641166909 +0000 UTC m=+2592.527192217" Oct 06 12:30:46 crc kubenswrapper[4958]: I1006 12:30:46.927760 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:30:46 crc kubenswrapper[4958]: E1006 12:30:46.928527 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:30:57 crc kubenswrapper[4958]: I1006 12:30:57.913542 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:30:57 crc kubenswrapper[4958]: E1006 12:30:57.914829 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:31:12 crc kubenswrapper[4958]: I1006 12:31:12.915253 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:31:12 crc kubenswrapper[4958]: E1006 12:31:12.916589 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:31:23 crc kubenswrapper[4958]: I1006 12:31:23.914075 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:31:23 crc kubenswrapper[4958]: E1006 12:31:23.916394 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:31:36 crc kubenswrapper[4958]: I1006 12:31:36.925457 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:31:36 crc kubenswrapper[4958]: E1006 12:31:36.926753 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:31:47 crc kubenswrapper[4958]: I1006 12:31:47.914879 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:31:47 crc kubenswrapper[4958]: E1006 12:31:47.916433 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:32:00 crc kubenswrapper[4958]: I1006 12:32:00.913462 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:32:00 crc kubenswrapper[4958]: E1006 12:32:00.914525 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:32:15 crc kubenswrapper[4958]: I1006 12:32:15.913580 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:32:15 crc kubenswrapper[4958]: E1006 12:32:15.914939 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:32:26 crc kubenswrapper[4958]: I1006 12:32:26.921418 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:32:27 crc kubenswrapper[4958]: I1006 12:32:27.867711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"a73e0730b223a642835b16bd60a60b353bbfb0c2de557e1f9819566cffd07688"} Oct 06 12:33:31 crc kubenswrapper[4958]: I1006 12:33:31.591107 4958 generic.go:334] "Generic (PLEG): container finished" podID="5569cfb8-1cd6-4f3d-9eee-282ddce72171" containerID="b16aa37df53a02d232f74951d3a59e55e0b5505bca8b7f0c533222f2964f5a96" exitCode=0 Oct 06 12:33:31 crc kubenswrapper[4958]: I1006 12:33:31.591932 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" event={"ID":"5569cfb8-1cd6-4f3d-9eee-282ddce72171","Type":"ContainerDied","Data":"b16aa37df53a02d232f74951d3a59e55e0b5505bca8b7f0c533222f2964f5a96"} Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.243630 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.351643 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-1\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.351711 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-0\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.351794 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ssh-key\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.351865 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-telemetry-combined-ca-bundle\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.351906 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-2\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc 
kubenswrapper[4958]: I1006 12:33:33.351983 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7n6c\" (UniqueName: \"kubernetes.io/projected/5569cfb8-1cd6-4f3d-9eee-282ddce72171-kube-api-access-q7n6c\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.352039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-inventory\") pod \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\" (UID: \"5569cfb8-1cd6-4f3d-9eee-282ddce72171\") " Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.359790 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.365420 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5569cfb8-1cd6-4f3d-9eee-282ddce72171-kube-api-access-q7n6c" (OuterVolumeSpecName: "kube-api-access-q7n6c") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "kube-api-access-q7n6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.389292 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.397832 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.398958 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.402577 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.403214 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-inventory" (OuterVolumeSpecName: "inventory") pod "5569cfb8-1cd6-4f3d-9eee-282ddce72171" (UID: "5569cfb8-1cd6-4f3d-9eee-282ddce72171"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454718 4958 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454807 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454823 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7n6c\" (UniqueName: \"kubernetes.io/projected/5569cfb8-1cd6-4f3d-9eee-282ddce72171-kube-api-access-q7n6c\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454870 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454884 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454900 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.454915 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/5569cfb8-1cd6-4f3d-9eee-282ddce72171-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.617934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" event={"ID":"5569cfb8-1cd6-4f3d-9eee-282ddce72171","Type":"ContainerDied","Data":"0c6b212335927d4877b443f769fea0bb591079dce646b10884d4610fa20e216d"} Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.617986 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6b212335927d4877b443f769fea0bb591079dce646b10884d4610fa20e216d" Oct 06 12:33:33 crc kubenswrapper[4958]: I1006 12:33:33.618038 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-47hl4" Oct 06 12:34:06 crc kubenswrapper[4958]: E1006 12:34:06.787978 4958 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:49756->38.102.83.73:43425: write tcp 38.102.83.73:49756->38.102.83.73:43425: write: broken pipe Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.186506 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d8qjr"] Oct 06 12:34:10 crc kubenswrapper[4958]: E1006 12:34:10.188526 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5569cfb8-1cd6-4f3d-9eee-282ddce72171" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.188615 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5569cfb8-1cd6-4f3d-9eee-282ddce72171" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.188877 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5569cfb8-1cd6-4f3d-9eee-282ddce72171" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 06 12:34:10 
crc kubenswrapper[4958]: I1006 12:34:10.190351 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.209886 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8qjr"] Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.267458 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-utilities\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.267675 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-catalog-content\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.267725 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sk9x\" (UniqueName: \"kubernetes.io/projected/ee0eeb09-63be-42a4-81dd-a7edb965687b-kube-api-access-7sk9x\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.370689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-catalog-content\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 
12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.370764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sk9x\" (UniqueName: \"kubernetes.io/projected/ee0eeb09-63be-42a4-81dd-a7edb965687b-kube-api-access-7sk9x\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.371009 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-utilities\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.371404 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-catalog-content\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.371469 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-utilities\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 12:34:10.402559 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sk9x\" (UniqueName: \"kubernetes.io/projected/ee0eeb09-63be-42a4-81dd-a7edb965687b-kube-api-access-7sk9x\") pod \"redhat-marketplace-d8qjr\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:10 crc kubenswrapper[4958]: I1006 
12:34:10.514124 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:11 crc kubenswrapper[4958]: I1006 12:34:11.026820 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8qjr"] Oct 06 12:34:12 crc kubenswrapper[4958]: I1006 12:34:12.032456 4958 generic.go:334] "Generic (PLEG): container finished" podID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerID="a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e" exitCode=0 Oct 06 12:34:12 crc kubenswrapper[4958]: I1006 12:34:12.032971 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8qjr" event={"ID":"ee0eeb09-63be-42a4-81dd-a7edb965687b","Type":"ContainerDied","Data":"a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e"} Oct 06 12:34:12 crc kubenswrapper[4958]: I1006 12:34:12.033010 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8qjr" event={"ID":"ee0eeb09-63be-42a4-81dd-a7edb965687b","Type":"ContainerStarted","Data":"03c340f7854c7be64d90a952e912505bde6629f8f76416109adb51d68b1a2094"} Oct 06 12:34:14 crc kubenswrapper[4958]: I1006 12:34:14.078250 4958 generic.go:334] "Generic (PLEG): container finished" podID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerID="f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5" exitCode=0 Oct 06 12:34:14 crc kubenswrapper[4958]: I1006 12:34:14.078357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8qjr" event={"ID":"ee0eeb09-63be-42a4-81dd-a7edb965687b","Type":"ContainerDied","Data":"f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5"} Oct 06 12:34:15 crc kubenswrapper[4958]: I1006 12:34:15.094072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8qjr" 
event={"ID":"ee0eeb09-63be-42a4-81dd-a7edb965687b","Type":"ContainerStarted","Data":"9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc"} Oct 06 12:34:15 crc kubenswrapper[4958]: I1006 12:34:15.120292 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d8qjr" podStartSLOduration=2.655639247 podStartE2EDuration="5.120264515s" podCreationTimestamp="2025-10-06 12:34:10 +0000 UTC" firstStartedPulling="2025-10-06 12:34:12.035447679 +0000 UTC m=+2805.921472997" lastFinishedPulling="2025-10-06 12:34:14.500072957 +0000 UTC m=+2808.386098265" observedRunningTime="2025-10-06 12:34:15.111698895 +0000 UTC m=+2808.997724253" watchObservedRunningTime="2025-10-06 12:34:15.120264515 +0000 UTC m=+2809.006289823" Oct 06 12:34:20 crc kubenswrapper[4958]: I1006 12:34:20.515287 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:20 crc kubenswrapper[4958]: I1006 12:34:20.516205 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:20 crc kubenswrapper[4958]: I1006 12:34:20.592350 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:21 crc kubenswrapper[4958]: I1006 12:34:21.268667 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:21 crc kubenswrapper[4958]: I1006 12:34:21.326823 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8qjr"] Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.208012 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d8qjr" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="registry-server" 
containerID="cri-o://9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc" gracePeriod=2 Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.768937 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.952728 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-catalog-content\") pod \"ee0eeb09-63be-42a4-81dd-a7edb965687b\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.952884 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-utilities\") pod \"ee0eeb09-63be-42a4-81dd-a7edb965687b\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.953026 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sk9x\" (UniqueName: \"kubernetes.io/projected/ee0eeb09-63be-42a4-81dd-a7edb965687b-kube-api-access-7sk9x\") pod \"ee0eeb09-63be-42a4-81dd-a7edb965687b\" (UID: \"ee0eeb09-63be-42a4-81dd-a7edb965687b\") " Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.953735 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-utilities" (OuterVolumeSpecName: "utilities") pod "ee0eeb09-63be-42a4-81dd-a7edb965687b" (UID: "ee0eeb09-63be-42a4-81dd-a7edb965687b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.960338 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0eeb09-63be-42a4-81dd-a7edb965687b-kube-api-access-7sk9x" (OuterVolumeSpecName: "kube-api-access-7sk9x") pod "ee0eeb09-63be-42a4-81dd-a7edb965687b" (UID: "ee0eeb09-63be-42a4-81dd-a7edb965687b"). InnerVolumeSpecName "kube-api-access-7sk9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:34:23 crc kubenswrapper[4958]: I1006 12:34:23.966793 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee0eeb09-63be-42a4-81dd-a7edb965687b" (UID: "ee0eeb09-63be-42a4-81dd-a7edb965687b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.055726 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.055771 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sk9x\" (UniqueName: \"kubernetes.io/projected/ee0eeb09-63be-42a4-81dd-a7edb965687b-kube-api-access-7sk9x\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.055786 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0eeb09-63be-42a4-81dd-a7edb965687b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.242428 4958 generic.go:334] "Generic (PLEG): container finished" podID="ee0eeb09-63be-42a4-81dd-a7edb965687b" 
containerID="9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc" exitCode=0 Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.242640 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8qjr" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.242669 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8qjr" event={"ID":"ee0eeb09-63be-42a4-81dd-a7edb965687b","Type":"ContainerDied","Data":"9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc"} Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.243305 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8qjr" event={"ID":"ee0eeb09-63be-42a4-81dd-a7edb965687b","Type":"ContainerDied","Data":"03c340f7854c7be64d90a952e912505bde6629f8f76416109adb51d68b1a2094"} Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.243875 4958 scope.go:117] "RemoveContainer" containerID="9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.271936 4958 scope.go:117] "RemoveContainer" containerID="f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.308549 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8qjr"] Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.324475 4958 scope.go:117] "RemoveContainer" containerID="a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.324762 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8qjr"] Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.358774 4958 scope.go:117] "RemoveContainer" containerID="9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc" Oct 06 
12:34:24 crc kubenswrapper[4958]: E1006 12:34:24.359340 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc\": container with ID starting with 9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc not found: ID does not exist" containerID="9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.359375 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc"} err="failed to get container status \"9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc\": rpc error: code = NotFound desc = could not find container \"9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc\": container with ID starting with 9954c913f64d921c430c4fd326cc441daf8ce6bab84ace16f4e3a7cbe113edcc not found: ID does not exist" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.359407 4958 scope.go:117] "RemoveContainer" containerID="f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5" Oct 06 12:34:24 crc kubenswrapper[4958]: E1006 12:34:24.359634 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5\": container with ID starting with f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5 not found: ID does not exist" containerID="f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.359666 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5"} err="failed to get container status 
\"f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5\": rpc error: code = NotFound desc = could not find container \"f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5\": container with ID starting with f901ce2cd45475fb3a63c44233aa38fa42cb1da743790b467731b1c0aa80d7d5 not found: ID does not exist" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.359680 4958 scope.go:117] "RemoveContainer" containerID="a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e" Oct 06 12:34:24 crc kubenswrapper[4958]: E1006 12:34:24.359933 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e\": container with ID starting with a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e not found: ID does not exist" containerID="a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.359948 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e"} err="failed to get container status \"a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e\": rpc error: code = NotFound desc = could not find container \"a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e\": container with ID starting with a3d7793f1f5b246cb011bf3dc7022535ffd82b1e7bbf4ba6311fcebdb9f43a6e not found: ID does not exist" Oct 06 12:34:24 crc kubenswrapper[4958]: I1006 12:34:24.927309 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" path="/var/lib/kubelet/pods/ee0eeb09-63be-42a4-81dd-a7edb965687b/volumes" Oct 06 12:34:53 crc kubenswrapper[4958]: I1006 12:34:53.802113 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:34:53 crc kubenswrapper[4958]: I1006 12:34:53.802940 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.686064 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6ljf"] Oct 06 12:35:08 crc kubenswrapper[4958]: E1006 12:35:08.687188 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="registry-server" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.687208 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="registry-server" Oct 06 12:35:08 crc kubenswrapper[4958]: E1006 12:35:08.687230 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="extract-content" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.687250 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="extract-content" Oct 06 12:35:08 crc kubenswrapper[4958]: E1006 12:35:08.687301 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="extract-utilities" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.687311 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="extract-utilities" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 
12:35:08.687530 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0eeb09-63be-42a4-81dd-a7edb965687b" containerName="registry-server" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.689190 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.697444 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6ljf"] Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.757483 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-catalog-content\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.757993 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5629\" (UniqueName: \"kubernetes.io/projected/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-kube-api-access-h5629\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.758254 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-utilities\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.859101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-utilities\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.859207 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-catalog-content\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.859298 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5629\" (UniqueName: \"kubernetes.io/projected/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-kube-api-access-h5629\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.859674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-utilities\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.859720 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-catalog-content\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:08 crc kubenswrapper[4958]: I1006 12:35:08.879040 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5629\" (UniqueName: 
\"kubernetes.io/projected/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-kube-api-access-h5629\") pod \"certified-operators-b6ljf\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:09 crc kubenswrapper[4958]: I1006 12:35:09.053823 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:09 crc kubenswrapper[4958]: I1006 12:35:09.573753 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6ljf"] Oct 06 12:35:09 crc kubenswrapper[4958]: I1006 12:35:09.755703 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ljf" event={"ID":"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c","Type":"ContainerStarted","Data":"347005a0ac84616d07deebc2438152eefd28f5dcceed4fc04ed1c0e71210faef"} Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.478800 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7mhzg"] Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.480863 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.506788 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mhzg"] Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.593712 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmbk\" (UniqueName: \"kubernetes.io/projected/cb725344-85ad-46d2-9a7b-f3467c6abb5b-kube-api-access-2jmbk\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.593872 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-utilities\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.593982 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-catalog-content\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.696420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-utilities\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.696575 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-catalog-content\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.696619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jmbk\" (UniqueName: \"kubernetes.io/projected/cb725344-85ad-46d2-9a7b-f3467c6abb5b-kube-api-access-2jmbk\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.696991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-utilities\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.697300 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-catalog-content\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.720718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jmbk\" (UniqueName: \"kubernetes.io/projected/cb725344-85ad-46d2-9a7b-f3467c6abb5b-kube-api-access-2jmbk\") pod \"redhat-operators-7mhzg\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.765444 4958 generic.go:334] "Generic (PLEG): container finished" podID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" 
containerID="2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84" exitCode=0 Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.765489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ljf" event={"ID":"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c","Type":"ContainerDied","Data":"2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84"} Oct 06 12:35:10 crc kubenswrapper[4958]: I1006 12:35:10.804824 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.077259 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-frg9q"] Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.078982 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.115473 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frg9q"] Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.205641 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xzc\" (UniqueName: \"kubernetes.io/projected/1a22ded1-1c29-4518-aab8-576cde5eb9a2-kube-api-access-x2xzc\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.206318 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-utilities\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc 
kubenswrapper[4958]: I1006 12:35:11.206457 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-catalog-content\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.303564 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mhzg"] Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.308642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xzc\" (UniqueName: \"kubernetes.io/projected/1a22ded1-1c29-4518-aab8-576cde5eb9a2-kube-api-access-x2xzc\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.308735 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-utilities\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.308762 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-catalog-content\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.309305 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-catalog-content\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.309463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-utilities\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: W1006 12:35:11.309653 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb725344_85ad_46d2_9a7b_f3467c6abb5b.slice/crio-8a65e4c0b678d5a32fc544bd776b88628270734c01f2ee62af3b9b1e7f6aea5a WatchSource:0}: Error finding container 8a65e4c0b678d5a32fc544bd776b88628270734c01f2ee62af3b9b1e7f6aea5a: Status 404 returned error can't find the container with id 8a65e4c0b678d5a32fc544bd776b88628270734c01f2ee62af3b9b1e7f6aea5a Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.327938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xzc\" (UniqueName: \"kubernetes.io/projected/1a22ded1-1c29-4518-aab8-576cde5eb9a2-kube-api-access-x2xzc\") pod \"community-operators-frg9q\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.401982 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.753885 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frg9q"] Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.776894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerStarted","Data":"fc48cd0849a0995ff359c225edaf4974cf26636d95b497899e9d338ef8fc1324"} Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.778457 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerID="21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00" exitCode=0 Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.778513 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mhzg" event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerDied","Data":"21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00"} Oct 06 12:35:11 crc kubenswrapper[4958]: I1006 12:35:11.778542 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mhzg" event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerStarted","Data":"8a65e4c0b678d5a32fc544bd776b88628270734c01f2ee62af3b9b1e7f6aea5a"} Oct 06 12:35:12 crc kubenswrapper[4958]: I1006 12:35:12.792046 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerID="f304a2ce590ee5823ac68dc65808b901b2af48abb0429b8e877dd05d3f04445b" exitCode=0 Oct 06 12:35:12 crc kubenswrapper[4958]: I1006 12:35:12.792306 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" 
event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerDied","Data":"f304a2ce590ee5823ac68dc65808b901b2af48abb0429b8e877dd05d3f04445b"} Oct 06 12:35:12 crc kubenswrapper[4958]: I1006 12:35:12.799999 4958 generic.go:334] "Generic (PLEG): container finished" podID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerID="a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e" exitCode=0 Oct 06 12:35:12 crc kubenswrapper[4958]: I1006 12:35:12.800056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ljf" event={"ID":"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c","Type":"ContainerDied","Data":"a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e"} Oct 06 12:35:13 crc kubenswrapper[4958]: I1006 12:35:13.820043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mhzg" event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerStarted","Data":"831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e"} Oct 06 12:35:14 crc kubenswrapper[4958]: I1006 12:35:14.833163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerStarted","Data":"272e2c5d675db20882284b84c53f0795f048dd0cd386e0765301789f28b833ba"} Oct 06 12:35:14 crc kubenswrapper[4958]: I1006 12:35:14.838134 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ljf" event={"ID":"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c","Type":"ContainerStarted","Data":"ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6"} Oct 06 12:35:14 crc kubenswrapper[4958]: I1006 12:35:14.883598 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6ljf" podStartSLOduration=3.841892356 podStartE2EDuration="6.883579213s" podCreationTimestamp="2025-10-06 12:35:08 +0000 
UTC" firstStartedPulling="2025-10-06 12:35:10.767886378 +0000 UTC m=+2864.653911686" lastFinishedPulling="2025-10-06 12:35:13.809573195 +0000 UTC m=+2867.695598543" observedRunningTime="2025-10-06 12:35:14.881917634 +0000 UTC m=+2868.767942982" watchObservedRunningTime="2025-10-06 12:35:14.883579213 +0000 UTC m=+2868.769604531" Oct 06 12:35:15 crc kubenswrapper[4958]: I1006 12:35:15.854701 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerID="831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e" exitCode=0 Oct 06 12:35:15 crc kubenswrapper[4958]: I1006 12:35:15.854767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mhzg" event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerDied","Data":"831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e"} Oct 06 12:35:16 crc kubenswrapper[4958]: I1006 12:35:16.872988 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerID="272e2c5d675db20882284b84c53f0795f048dd0cd386e0765301789f28b833ba" exitCode=0 Oct 06 12:35:16 crc kubenswrapper[4958]: I1006 12:35:16.873050 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerDied","Data":"272e2c5d675db20882284b84c53f0795f048dd0cd386e0765301789f28b833ba"} Oct 06 12:35:17 crc kubenswrapper[4958]: I1006 12:35:17.885923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerStarted","Data":"af2d5350f392ecbaa87d54c0b86f76914bb2c82d74b3cdec92a7355afeeecca2"} Oct 06 12:35:17 crc kubenswrapper[4958]: I1006 12:35:17.889411 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mhzg" 
event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerStarted","Data":"9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48"} Oct 06 12:35:17 crc kubenswrapper[4958]: I1006 12:35:17.905824 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-frg9q" podStartSLOduration=2.232845789 podStartE2EDuration="6.905805858s" podCreationTimestamp="2025-10-06 12:35:11 +0000 UTC" firstStartedPulling="2025-10-06 12:35:12.795803463 +0000 UTC m=+2866.681828811" lastFinishedPulling="2025-10-06 12:35:17.468763572 +0000 UTC m=+2871.354788880" observedRunningTime="2025-10-06 12:35:17.901921852 +0000 UTC m=+2871.787947170" watchObservedRunningTime="2025-10-06 12:35:17.905805858 +0000 UTC m=+2871.791831166" Oct 06 12:35:17 crc kubenswrapper[4958]: I1006 12:35:17.928825 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7mhzg" podStartSLOduration=3.078925963 podStartE2EDuration="7.928808445s" podCreationTimestamp="2025-10-06 12:35:10 +0000 UTC" firstStartedPulling="2025-10-06 12:35:11.794196824 +0000 UTC m=+2865.680222132" lastFinishedPulling="2025-10-06 12:35:16.644079266 +0000 UTC m=+2870.530104614" observedRunningTime="2025-10-06 12:35:17.921981091 +0000 UTC m=+2871.808006409" watchObservedRunningTime="2025-10-06 12:35:17.928808445 +0000 UTC m=+2871.814833753" Oct 06 12:35:19 crc kubenswrapper[4958]: I1006 12:35:19.054582 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:19 crc kubenswrapper[4958]: I1006 12:35:19.055044 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:19 crc kubenswrapper[4958]: I1006 12:35:19.125782 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 
12:35:19 crc kubenswrapper[4958]: I1006 12:35:19.958000 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:20 crc kubenswrapper[4958]: I1006 12:35:20.821589 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:20 crc kubenswrapper[4958]: I1006 12:35:20.823553 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:21 crc kubenswrapper[4958]: I1006 12:35:21.402925 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:21 crc kubenswrapper[4958]: I1006 12:35:21.402966 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:21 crc kubenswrapper[4958]: I1006 12:35:21.460076 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:21 crc kubenswrapper[4958]: I1006 12:35:21.670692 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6ljf"] Oct 06 12:35:21 crc kubenswrapper[4958]: I1006 12:35:21.901521 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mhzg" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="registry-server" probeResult="failure" output=< Oct 06 12:35:21 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 12:35:21 crc kubenswrapper[4958]: > Oct 06 12:35:22 crc kubenswrapper[4958]: I1006 12:35:22.937427 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6ljf" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="registry-server" 
containerID="cri-o://ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6" gracePeriod=2 Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.459997 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.579607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-catalog-content\") pod \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.579815 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-utilities\") pod \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.579911 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5629\" (UniqueName: \"kubernetes.io/projected/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-kube-api-access-h5629\") pod \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\" (UID: \"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c\") " Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.581051 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-utilities" (OuterVolumeSpecName: "utilities") pod "2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" (UID: "2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.585259 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-kube-api-access-h5629" (OuterVolumeSpecName: "kube-api-access-h5629") pod "2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" (UID: "2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c"). InnerVolumeSpecName "kube-api-access-h5629". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.643538 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" (UID: "2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.682574 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.682609 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5629\" (UniqueName: \"kubernetes.io/projected/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-kube-api-access-h5629\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.682624 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.802067 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.802196 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.958762 4958 generic.go:334] "Generic (PLEG): container finished" podID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerID="ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6" exitCode=0 Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.958800 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6ljf" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.958823 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ljf" event={"ID":"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c","Type":"ContainerDied","Data":"ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6"} Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.958865 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ljf" event={"ID":"2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c","Type":"ContainerDied","Data":"347005a0ac84616d07deebc2438152eefd28f5dcceed4fc04ed1c0e71210faef"} Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.958904 4958 scope.go:117] "RemoveContainer" containerID="ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6" Oct 06 12:35:23 crc kubenswrapper[4958]: I1006 12:35:23.996552 4958 scope.go:117] "RemoveContainer" containerID="a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e" Oct 06 12:35:24 crc 
kubenswrapper[4958]: I1006 12:35:24.000661 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6ljf"] Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.013424 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6ljf"] Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.020799 4958 scope.go:117] "RemoveContainer" containerID="2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.084060 4958 scope.go:117] "RemoveContainer" containerID="ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6" Oct 06 12:35:24 crc kubenswrapper[4958]: E1006 12:35:24.085059 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6\": container with ID starting with ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6 not found: ID does not exist" containerID="ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.085097 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6"} err="failed to get container status \"ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6\": rpc error: code = NotFound desc = could not find container \"ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6\": container with ID starting with ed194a3963c6f7f4400003d335758b43b35ebad04913370b47f500c209501ee6 not found: ID does not exist" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.085122 4958 scope.go:117] "RemoveContainer" containerID="a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e" Oct 06 12:35:24 crc kubenswrapper[4958]: E1006 12:35:24.085463 4958 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e\": container with ID starting with a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e not found: ID does not exist" containerID="a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.085505 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e"} err="failed to get container status \"a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e\": rpc error: code = NotFound desc = could not find container \"a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e\": container with ID starting with a5e23e6fdd7b4ddd36a32184ddd80886c7052baf84adcaf9f0ba3a010ccf800e not found: ID does not exist" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.085530 4958 scope.go:117] "RemoveContainer" containerID="2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84" Oct 06 12:35:24 crc kubenswrapper[4958]: E1006 12:35:24.086109 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84\": container with ID starting with 2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84 not found: ID does not exist" containerID="2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.086210 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84"} err="failed to get container status \"2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84\": rpc error: code = NotFound desc = could 
not find container \"2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84\": container with ID starting with 2c73d7a48548bc5cfae5d3dacd1571f13ae6e1b274f35dc32a5d67e207ae7f84 not found: ID does not exist" Oct 06 12:35:24 crc kubenswrapper[4958]: I1006 12:35:24.957536 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" path="/var/lib/kubelet/pods/2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c/volumes" Oct 06 12:35:30 crc kubenswrapper[4958]: I1006 12:35:30.873033 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:30 crc kubenswrapper[4958]: I1006 12:35:30.941110 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:31 crc kubenswrapper[4958]: I1006 12:35:31.115771 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mhzg"] Oct 06 12:35:31 crc kubenswrapper[4958]: I1006 12:35:31.448772 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.047822 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7mhzg" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="registry-server" containerID="cri-o://9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48" gracePeriod=2 Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.603766 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.681432 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jmbk\" (UniqueName: \"kubernetes.io/projected/cb725344-85ad-46d2-9a7b-f3467c6abb5b-kube-api-access-2jmbk\") pod \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.681493 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-utilities\") pod \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.681609 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-catalog-content\") pod \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\" (UID: \"cb725344-85ad-46d2-9a7b-f3467c6abb5b\") " Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.683015 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-utilities" (OuterVolumeSpecName: "utilities") pod "cb725344-85ad-46d2-9a7b-f3467c6abb5b" (UID: "cb725344-85ad-46d2-9a7b-f3467c6abb5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.687291 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb725344-85ad-46d2-9a7b-f3467c6abb5b-kube-api-access-2jmbk" (OuterVolumeSpecName: "kube-api-access-2jmbk") pod "cb725344-85ad-46d2-9a7b-f3467c6abb5b" (UID: "cb725344-85ad-46d2-9a7b-f3467c6abb5b"). InnerVolumeSpecName "kube-api-access-2jmbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.773006 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb725344-85ad-46d2-9a7b-f3467c6abb5b" (UID: "cb725344-85ad-46d2-9a7b-f3467c6abb5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.783917 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jmbk\" (UniqueName: \"kubernetes.io/projected/cb725344-85ad-46d2-9a7b-f3467c6abb5b-kube-api-access-2jmbk\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.783976 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:32 crc kubenswrapper[4958]: I1006 12:35:32.783991 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb725344-85ad-46d2-9a7b-f3467c6abb5b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.062657 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerID="9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48" exitCode=0 Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.062727 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mhzg" event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerDied","Data":"9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48"} Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.063102 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-7mhzg" event={"ID":"cb725344-85ad-46d2-9a7b-f3467c6abb5b","Type":"ContainerDied","Data":"8a65e4c0b678d5a32fc544bd776b88628270734c01f2ee62af3b9b1e7f6aea5a"} Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.063140 4958 scope.go:117] "RemoveContainer" containerID="9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.062799 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mhzg" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.108066 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mhzg"] Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.110909 4958 scope.go:117] "RemoveContainer" containerID="831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.120012 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7mhzg"] Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.136523 4958 scope.go:117] "RemoveContainer" containerID="21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.191707 4958 scope.go:117] "RemoveContainer" containerID="9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48" Oct 06 12:35:33 crc kubenswrapper[4958]: E1006 12:35:33.192313 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48\": container with ID starting with 9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48 not found: ID does not exist" containerID="9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.192427 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48"} err="failed to get container status \"9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48\": rpc error: code = NotFound desc = could not find container \"9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48\": container with ID starting with 9ff527c7caf10078f8310150b5119a0ae33a15bc36c277b7066c85a69ad07a48 not found: ID does not exist" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.192466 4958 scope.go:117] "RemoveContainer" containerID="831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e" Oct 06 12:35:33 crc kubenswrapper[4958]: E1006 12:35:33.193005 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e\": container with ID starting with 831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e not found: ID does not exist" containerID="831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.193050 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e"} err="failed to get container status \"831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e\": rpc error: code = NotFound desc = could not find container \"831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e\": container with ID starting with 831ab2b58d4910300d2144305a3eac80b2781675fe715792451908d9594d093e not found: ID does not exist" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.193080 4958 scope.go:117] "RemoveContainer" containerID="21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00" Oct 06 12:35:33 crc kubenswrapper[4958]: E1006 
12:35:33.193516 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00\": container with ID starting with 21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00 not found: ID does not exist" containerID="21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.193573 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00"} err="failed to get container status \"21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00\": rpc error: code = NotFound desc = could not find container \"21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00\": container with ID starting with 21eea412e02558a1726cff00aef27be9170e59c62f8eb2be96e07b509e76ae00 not found: ID does not exist" Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.717935 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frg9q"] Oct 06 12:35:33 crc kubenswrapper[4958]: I1006 12:35:33.718318 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-frg9q" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="registry-server" containerID="cri-o://af2d5350f392ecbaa87d54c0b86f76914bb2c82d74b3cdec92a7355afeeecca2" gracePeriod=2 Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.081459 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerID="af2d5350f392ecbaa87d54c0b86f76914bb2c82d74b3cdec92a7355afeeecca2" exitCode=0 Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.081554 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" 
event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerDied","Data":"af2d5350f392ecbaa87d54c0b86f76914bb2c82d74b3cdec92a7355afeeecca2"} Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.180217 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.319342 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-utilities\") pod \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.319448 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xzc\" (UniqueName: \"kubernetes.io/projected/1a22ded1-1c29-4518-aab8-576cde5eb9a2-kube-api-access-x2xzc\") pod \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.319553 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-catalog-content\") pod \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\" (UID: \"1a22ded1-1c29-4518-aab8-576cde5eb9a2\") " Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.320554 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-utilities" (OuterVolumeSpecName: "utilities") pod "1a22ded1-1c29-4518-aab8-576cde5eb9a2" (UID: "1a22ded1-1c29-4518-aab8-576cde5eb9a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.323301 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.327101 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a22ded1-1c29-4518-aab8-576cde5eb9a2-kube-api-access-x2xzc" (OuterVolumeSpecName: "kube-api-access-x2xzc") pod "1a22ded1-1c29-4518-aab8-576cde5eb9a2" (UID: "1a22ded1-1c29-4518-aab8-576cde5eb9a2"). InnerVolumeSpecName "kube-api-access-x2xzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.395138 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a22ded1-1c29-4518-aab8-576cde5eb9a2" (UID: "1a22ded1-1c29-4518-aab8-576cde5eb9a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.425251 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xzc\" (UniqueName: \"kubernetes.io/projected/1a22ded1-1c29-4518-aab8-576cde5eb9a2-kube-api-access-x2xzc\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.425286 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a22ded1-1c29-4518-aab8-576cde5eb9a2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:35:34 crc kubenswrapper[4958]: I1006 12:35:34.929286 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" path="/var/lib/kubelet/pods/cb725344-85ad-46d2-9a7b-f3467c6abb5b/volumes" Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.096395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frg9q" event={"ID":"1a22ded1-1c29-4518-aab8-576cde5eb9a2","Type":"ContainerDied","Data":"fc48cd0849a0995ff359c225edaf4974cf26636d95b497899e9d338ef8fc1324"} Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.096459 4958 scope.go:117] "RemoveContainer" containerID="af2d5350f392ecbaa87d54c0b86f76914bb2c82d74b3cdec92a7355afeeecca2" Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.096546 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-frg9q" Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.128067 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frg9q"] Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.129024 4958 scope.go:117] "RemoveContainer" containerID="272e2c5d675db20882284b84c53f0795f048dd0cd386e0765301789f28b833ba" Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.135135 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-frg9q"] Oct 06 12:35:35 crc kubenswrapper[4958]: I1006 12:35:35.153495 4958 scope.go:117] "RemoveContainer" containerID="f304a2ce590ee5823ac68dc65808b901b2af48abb0429b8e877dd05d3f04445b" Oct 06 12:35:36 crc kubenswrapper[4958]: I1006 12:35:36.954746 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" path="/var/lib/kubelet/pods/1a22ded1-1c29-4518-aab8-576cde5eb9a2/volumes" Oct 06 12:35:53 crc kubenswrapper[4958]: I1006 12:35:53.801521 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:35:53 crc kubenswrapper[4958]: I1006 12:35:53.802249 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:35:53 crc kubenswrapper[4958]: I1006 12:35:53.802321 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 
12:35:53 crc kubenswrapper[4958]: I1006 12:35:53.803495 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a73e0730b223a642835b16bd60a60b353bbfb0c2de557e1f9819566cffd07688"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:35:53 crc kubenswrapper[4958]: I1006 12:35:53.803636 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://a73e0730b223a642835b16bd60a60b353bbfb0c2de557e1f9819566cffd07688" gracePeriod=600 Oct 06 12:35:54 crc kubenswrapper[4958]: I1006 12:35:54.316927 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="a73e0730b223a642835b16bd60a60b353bbfb0c2de557e1f9819566cffd07688" exitCode=0 Oct 06 12:35:54 crc kubenswrapper[4958]: I1006 12:35:54.317029 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"a73e0730b223a642835b16bd60a60b353bbfb0c2de557e1f9819566cffd07688"} Oct 06 12:35:54 crc kubenswrapper[4958]: I1006 12:35:54.317372 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6"} Oct 06 12:35:54 crc kubenswrapper[4958]: I1006 12:35:54.317400 4958 scope.go:117] "RemoveContainer" containerID="d5a93cdafe689a1af3c191a363fd93dd9e51df0fafe51e825fe13eb8ebfdd2e5" Oct 06 12:38:23 crc kubenswrapper[4958]: I1006 12:38:23.801695 4958 
patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:38:23 crc kubenswrapper[4958]: I1006 12:38:23.802173 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:38:53 crc kubenswrapper[4958]: I1006 12:38:53.802255 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:38:53 crc kubenswrapper[4958]: I1006 12:38:53.803115 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:39:23 crc kubenswrapper[4958]: I1006 12:39:23.802332 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:39:23 crc kubenswrapper[4958]: I1006 12:39:23.802830 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:39:23 crc kubenswrapper[4958]: I1006 12:39:23.802897 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:39:23 crc kubenswrapper[4958]: I1006 12:39:23.803582 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:39:23 crc kubenswrapper[4958]: I1006 12:39:23.803636 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" gracePeriod=600 Oct 06 12:39:23 crc kubenswrapper[4958]: E1006 12:39:23.925900 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:39:24 crc kubenswrapper[4958]: I1006 12:39:24.481443 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" exitCode=0 Oct 06 
12:39:24 crc kubenswrapper[4958]: I1006 12:39:24.481859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6"} Oct 06 12:39:24 crc kubenswrapper[4958]: I1006 12:39:24.481902 4958 scope.go:117] "RemoveContainer" containerID="a73e0730b223a642835b16bd60a60b353bbfb0c2de557e1f9819566cffd07688" Oct 06 12:39:24 crc kubenswrapper[4958]: I1006 12:39:24.482660 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:39:24 crc kubenswrapper[4958]: E1006 12:39:24.483046 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:39:38 crc kubenswrapper[4958]: I1006 12:39:38.913112 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:39:38 crc kubenswrapper[4958]: E1006 12:39:38.913906 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:39:51 crc kubenswrapper[4958]: I1006 12:39:51.913235 4958 scope.go:117] "RemoveContainer" 
containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:39:51 crc kubenswrapper[4958]: E1006 12:39:51.914171 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:40:02 crc kubenswrapper[4958]: I1006 12:40:02.913388 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:40:02 crc kubenswrapper[4958]: E1006 12:40:02.914587 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:40:16 crc kubenswrapper[4958]: I1006 12:40:16.925284 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:40:16 crc kubenswrapper[4958]: E1006 12:40:16.925975 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:40:30 crc kubenswrapper[4958]: I1006 12:40:30.913297 4958 scope.go:117] 
"RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:40:30 crc kubenswrapper[4958]: E1006 12:40:30.914289 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:40:44 crc kubenswrapper[4958]: I1006 12:40:44.913976 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:40:44 crc kubenswrapper[4958]: E1006 12:40:44.914882 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:40:59 crc kubenswrapper[4958]: I1006 12:40:59.914126 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:40:59 crc kubenswrapper[4958]: E1006 12:40:59.915212 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:41:13 crc kubenswrapper[4958]: I1006 12:41:13.913909 
4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:41:13 crc kubenswrapper[4958]: E1006 12:41:13.914916 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:41:28 crc kubenswrapper[4958]: I1006 12:41:28.913655 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:41:28 crc kubenswrapper[4958]: E1006 12:41:28.914274 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:41:43 crc kubenswrapper[4958]: I1006 12:41:43.913757 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:41:43 crc kubenswrapper[4958]: E1006 12:41:43.914506 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:41:54 crc kubenswrapper[4958]: I1006 
12:41:54.913352 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:41:54 crc kubenswrapper[4958]: E1006 12:41:54.914258 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:42:07 crc kubenswrapper[4958]: I1006 12:42:07.913337 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:42:07 crc kubenswrapper[4958]: E1006 12:42:07.914226 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:42:18 crc kubenswrapper[4958]: I1006 12:42:18.912977 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:42:18 crc kubenswrapper[4958]: E1006 12:42:18.913667 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:42:32 crc 
kubenswrapper[4958]: I1006 12:42:32.913176 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:42:32 crc kubenswrapper[4958]: E1006 12:42:32.914081 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:42:44 crc kubenswrapper[4958]: I1006 12:42:44.913500 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:42:44 crc kubenswrapper[4958]: E1006 12:42:44.914291 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:42:59 crc kubenswrapper[4958]: I1006 12:42:59.913025 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:42:59 crc kubenswrapper[4958]: E1006 12:42:59.913878 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 
06 12:43:10 crc kubenswrapper[4958]: I1006 12:43:10.913543 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:43:10 crc kubenswrapper[4958]: E1006 12:43:10.914740 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:43:21 crc kubenswrapper[4958]: I1006 12:43:21.913643 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:43:21 crc kubenswrapper[4958]: E1006 12:43:21.915221 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:43:33 crc kubenswrapper[4958]: I1006 12:43:33.913307 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:43:33 crc kubenswrapper[4958]: E1006 12:43:33.914084 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:43:44 crc kubenswrapper[4958]: I1006 12:43:44.913831 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:43:44 crc kubenswrapper[4958]: E1006 12:43:44.914814 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:43:57 crc kubenswrapper[4958]: I1006 12:43:57.913107 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:43:57 crc kubenswrapper[4958]: E1006 12:43:57.913754 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:44:11 crc kubenswrapper[4958]: I1006 12:44:11.913343 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:44:11 crc kubenswrapper[4958]: E1006 12:44:11.917945 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.251052 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7n65n"] Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252053 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="extract-content" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252070 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="extract-content" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252087 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="extract-utilities" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252096 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="extract-utilities" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252121 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="extract-content" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252131 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="extract-content" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252168 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252177 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252210 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="extract-utilities" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252219 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="extract-utilities" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252234 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="extract-utilities" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252242 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="extract-utilities" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252257 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252264 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252278 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="extract-content" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252286 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="extract-content" Oct 06 12:44:16 crc kubenswrapper[4958]: E1006 12:44:16.252296 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252303 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252532 4958 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1a22ded1-1c29-4518-aab8-576cde5eb9a2" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252547 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa2bf6b-972c-42e9-b5e1-11e952cd8f6c" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.252572 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb725344-85ad-46d2-9a7b-f3467c6abb5b" containerName="registry-server" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.254243 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.270900 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7n65n"] Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.291440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjg4\" (UniqueName: \"kubernetes.io/projected/1911bbda-da43-42a5-a13c-a1a9c1f34312-kube-api-access-bvjg4\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.291548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-utilities\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.291677 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-catalog-content\") pod \"redhat-marketplace-7n65n\" (UID: 
\"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.393710 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-catalog-content\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.393815 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvjg4\" (UniqueName: \"kubernetes.io/projected/1911bbda-da43-42a5-a13c-a1a9c1f34312-kube-api-access-bvjg4\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.393870 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-utilities\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.394160 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-catalog-content\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.394350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-utilities\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " 
pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.422971 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvjg4\" (UniqueName: \"kubernetes.io/projected/1911bbda-da43-42a5-a13c-a1a9c1f34312-kube-api-access-bvjg4\") pod \"redhat-marketplace-7n65n\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:16 crc kubenswrapper[4958]: I1006 12:44:16.581672 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:17 crc kubenswrapper[4958]: I1006 12:44:17.044077 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7n65n"] Oct 06 12:44:17 crc kubenswrapper[4958]: I1006 12:44:17.423057 4958 generic.go:334] "Generic (PLEG): container finished" podID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerID="9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca" exitCode=0 Oct 06 12:44:17 crc kubenswrapper[4958]: I1006 12:44:17.423177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7n65n" event={"ID":"1911bbda-da43-42a5-a13c-a1a9c1f34312","Type":"ContainerDied","Data":"9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca"} Oct 06 12:44:17 crc kubenswrapper[4958]: I1006 12:44:17.423508 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7n65n" event={"ID":"1911bbda-da43-42a5-a13c-a1a9c1f34312","Type":"ContainerStarted","Data":"263370d171af21cdf4ed770e497eb6e8e0266ae842b4cfe631d51fe9702a878b"} Oct 06 12:44:17 crc kubenswrapper[4958]: I1006 12:44:17.428317 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:44:19 crc kubenswrapper[4958]: I1006 12:44:19.448764 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerID="5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc" exitCode=0 Oct 06 12:44:19 crc kubenswrapper[4958]: I1006 12:44:19.448810 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7n65n" event={"ID":"1911bbda-da43-42a5-a13c-a1a9c1f34312","Type":"ContainerDied","Data":"5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc"} Oct 06 12:44:20 crc kubenswrapper[4958]: I1006 12:44:20.459393 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7n65n" event={"ID":"1911bbda-da43-42a5-a13c-a1a9c1f34312","Type":"ContainerStarted","Data":"d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48"} Oct 06 12:44:20 crc kubenswrapper[4958]: I1006 12:44:20.479901 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7n65n" podStartSLOduration=1.915042269 podStartE2EDuration="4.479885641s" podCreationTimestamp="2025-10-06 12:44:16 +0000 UTC" firstStartedPulling="2025-10-06 12:44:17.428016805 +0000 UTC m=+3411.314042113" lastFinishedPulling="2025-10-06 12:44:19.992860157 +0000 UTC m=+3413.878885485" observedRunningTime="2025-10-06 12:44:20.474183136 +0000 UTC m=+3414.360208444" watchObservedRunningTime="2025-10-06 12:44:20.479885641 +0000 UTC m=+3414.365910949" Oct 06 12:44:25 crc kubenswrapper[4958]: I1006 12:44:25.912947 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6" Oct 06 12:44:26 crc kubenswrapper[4958]: I1006 12:44:26.523709 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"55c91d9b6f0526c3f4d84b26912ccb29208036769e8b90f3b9af429198f89ce0"} Oct 06 12:44:26 crc kubenswrapper[4958]: I1006 12:44:26.581904 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:26 crc kubenswrapper[4958]: I1006 12:44:26.581972 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:26 crc kubenswrapper[4958]: I1006 12:44:26.639000 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:27 crc kubenswrapper[4958]: I1006 12:44:27.582267 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:27 crc kubenswrapper[4958]: I1006 12:44:27.634944 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7n65n"] Oct 06 12:44:29 crc kubenswrapper[4958]: I1006 12:44:29.552583 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7n65n" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="registry-server" containerID="cri-o://d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48" gracePeriod=2 Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.153373 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.251008 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvjg4\" (UniqueName: \"kubernetes.io/projected/1911bbda-da43-42a5-a13c-a1a9c1f34312-kube-api-access-bvjg4\") pod \"1911bbda-da43-42a5-a13c-a1a9c1f34312\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.251130 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-utilities\") pod \"1911bbda-da43-42a5-a13c-a1a9c1f34312\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.251454 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-catalog-content\") pod \"1911bbda-da43-42a5-a13c-a1a9c1f34312\" (UID: \"1911bbda-da43-42a5-a13c-a1a9c1f34312\") " Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.259162 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-utilities" (OuterVolumeSpecName: "utilities") pod "1911bbda-da43-42a5-a13c-a1a9c1f34312" (UID: "1911bbda-da43-42a5-a13c-a1a9c1f34312"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.259644 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1911bbda-da43-42a5-a13c-a1a9c1f34312-kube-api-access-bvjg4" (OuterVolumeSpecName: "kube-api-access-bvjg4") pod "1911bbda-da43-42a5-a13c-a1a9c1f34312" (UID: "1911bbda-da43-42a5-a13c-a1a9c1f34312"). InnerVolumeSpecName "kube-api-access-bvjg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.268376 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1911bbda-da43-42a5-a13c-a1a9c1f34312" (UID: "1911bbda-da43-42a5-a13c-a1a9c1f34312"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.354762 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvjg4\" (UniqueName: \"kubernetes.io/projected/1911bbda-da43-42a5-a13c-a1a9c1f34312-kube-api-access-bvjg4\") on node \"crc\" DevicePath \"\"" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.354838 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.354861 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1911bbda-da43-42a5-a13c-a1a9c1f34312-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.566795 4958 generic.go:334] "Generic (PLEG): container finished" podID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerID="d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48" exitCode=0 Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.566859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7n65n" event={"ID":"1911bbda-da43-42a5-a13c-a1a9c1f34312","Type":"ContainerDied","Data":"d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48"} Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.567250 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7n65n" event={"ID":"1911bbda-da43-42a5-a13c-a1a9c1f34312","Type":"ContainerDied","Data":"263370d171af21cdf4ed770e497eb6e8e0266ae842b4cfe631d51fe9702a878b"} Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.567281 4958 scope.go:117] "RemoveContainer" containerID="d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.566888 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7n65n" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.594166 4958 scope.go:117] "RemoveContainer" containerID="5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.617948 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7n65n"] Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.632145 4958 scope.go:117] "RemoveContainer" containerID="9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.633524 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7n65n"] Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.680890 4958 scope.go:117] "RemoveContainer" containerID="d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48" Oct 06 12:44:30 crc kubenswrapper[4958]: E1006 12:44:30.681314 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48\": container with ID starting with d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48 not found: ID does not exist" containerID="d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.681348 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48"} err="failed to get container status \"d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48\": rpc error: code = NotFound desc = could not find container \"d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48\": container with ID starting with d082291f60ed56bbce8f7a91d184bb70ed807a316d923bce6a6d895a9ae9fa48 not found: ID does not exist" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.681373 4958 scope.go:117] "RemoveContainer" containerID="5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc" Oct 06 12:44:30 crc kubenswrapper[4958]: E1006 12:44:30.681754 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc\": container with ID starting with 5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc not found: ID does not exist" containerID="5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.681778 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc"} err="failed to get container status \"5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc\": rpc error: code = NotFound desc = could not find container \"5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc\": container with ID starting with 5d1d5938c507c2a6ee4cd695f27d24235a91f0205ccf2e0615cd42efcb74f6fc not found: ID does not exist" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.681798 4958 scope.go:117] "RemoveContainer" containerID="9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca" Oct 06 12:44:30 crc kubenswrapper[4958]: E1006 
12:44:30.682173 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca\": container with ID starting with 9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca not found: ID does not exist" containerID="9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.682218 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca"} err="failed to get container status \"9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca\": rpc error: code = NotFound desc = could not find container \"9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca\": container with ID starting with 9a04d38032bfc12d7196aa3efd0e84293daee52ae09021c9a5eacc4b8fe1f4ca not found: ID does not exist" Oct 06 12:44:30 crc kubenswrapper[4958]: I1006 12:44:30.929060 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" path="/var/lib/kubelet/pods/1911bbda-da43-42a5-a13c-a1a9c1f34312/volumes" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.220817 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc"] Oct 06 12:45:00 crc kubenswrapper[4958]: E1006 12:45:00.221826 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="extract-utilities" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.221842 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="extract-utilities" Oct 06 12:45:00 crc kubenswrapper[4958]: E1006 12:45:00.221866 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="extract-content" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.221872 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="extract-content" Oct 06 12:45:00 crc kubenswrapper[4958]: E1006 12:45:00.221887 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="registry-server" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.221893 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="registry-server" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.222071 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1911bbda-da43-42a5-a13c-a1a9c1f34312" containerName="registry-server" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.222873 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.225656 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.225656 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.232235 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc"] Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.294364 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a33723-60fc-479f-912c-799341c0deba-secret-volume\") pod \"collect-profiles-29329245-bttcc\" (UID: 
\"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.294524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x75\" (UniqueName: \"kubernetes.io/projected/f2a33723-60fc-479f-912c-799341c0deba-kube-api-access-k5x75\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.294558 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a33723-60fc-479f-912c-799341c0deba-config-volume\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.396549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x75\" (UniqueName: \"kubernetes.io/projected/f2a33723-60fc-479f-912c-799341c0deba-kube-api-access-k5x75\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.396729 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a33723-60fc-479f-912c-799341c0deba-config-volume\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.396851 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a33723-60fc-479f-912c-799341c0deba-secret-volume\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.397660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a33723-60fc-479f-912c-799341c0deba-config-volume\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.405300 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a33723-60fc-479f-912c-799341c0deba-secret-volume\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.415106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x75\" (UniqueName: \"kubernetes.io/projected/f2a33723-60fc-479f-912c-799341c0deba-kube-api-access-k5x75\") pod \"collect-profiles-29329245-bttcc\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.555092 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:00 crc kubenswrapper[4958]: I1006 12:45:00.981414 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc"] Oct 06 12:45:00 crc kubenswrapper[4958]: W1006 12:45:00.985943 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a33723_60fc_479f_912c_799341c0deba.slice/crio-b430a068c77ad7d10889a342dc6ab6db3920205a60980f29617f1496250506c3 WatchSource:0}: Error finding container b430a068c77ad7d10889a342dc6ab6db3920205a60980f29617f1496250506c3: Status 404 returned error can't find the container with id b430a068c77ad7d10889a342dc6ab6db3920205a60980f29617f1496250506c3 Oct 06 12:45:01 crc kubenswrapper[4958]: I1006 12:45:01.943666 4958 generic.go:334] "Generic (PLEG): container finished" podID="f2a33723-60fc-479f-912c-799341c0deba" containerID="8979002fccc3c0dd33fd689f00cc5b07110ff792ddbcdd9f4ddd889deecd5683" exitCode=0 Oct 06 12:45:01 crc kubenswrapper[4958]: I1006 12:45:01.943722 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" event={"ID":"f2a33723-60fc-479f-912c-799341c0deba","Type":"ContainerDied","Data":"8979002fccc3c0dd33fd689f00cc5b07110ff792ddbcdd9f4ddd889deecd5683"} Oct 06 12:45:01 crc kubenswrapper[4958]: I1006 12:45:01.943750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" event={"ID":"f2a33723-60fc-479f-912c-799341c0deba","Type":"ContainerStarted","Data":"b430a068c77ad7d10889a342dc6ab6db3920205a60980f29617f1496250506c3"} Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.411835 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.480086 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a33723-60fc-479f-912c-799341c0deba-secret-volume\") pod \"f2a33723-60fc-479f-912c-799341c0deba\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.480135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5x75\" (UniqueName: \"kubernetes.io/projected/f2a33723-60fc-479f-912c-799341c0deba-kube-api-access-k5x75\") pod \"f2a33723-60fc-479f-912c-799341c0deba\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.480206 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a33723-60fc-479f-912c-799341c0deba-config-volume\") pod \"f2a33723-60fc-479f-912c-799341c0deba\" (UID: \"f2a33723-60fc-479f-912c-799341c0deba\") " Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.480948 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a33723-60fc-479f-912c-799341c0deba-config-volume" (OuterVolumeSpecName: "config-volume") pod "f2a33723-60fc-479f-912c-799341c0deba" (UID: "f2a33723-60fc-479f-912c-799341c0deba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.486935 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a33723-60fc-479f-912c-799341c0deba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f2a33723-60fc-479f-912c-799341c0deba" (UID: "f2a33723-60fc-479f-912c-799341c0deba"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.486953 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a33723-60fc-479f-912c-799341c0deba-kube-api-access-k5x75" (OuterVolumeSpecName: "kube-api-access-k5x75") pod "f2a33723-60fc-479f-912c-799341c0deba" (UID: "f2a33723-60fc-479f-912c-799341c0deba"). InnerVolumeSpecName "kube-api-access-k5x75". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.582336 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a33723-60fc-479f-912c-799341c0deba-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.582368 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5x75\" (UniqueName: \"kubernetes.io/projected/f2a33723-60fc-479f-912c-799341c0deba-kube-api-access-k5x75\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.582377 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a33723-60fc-479f-912c-799341c0deba-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.964886 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" event={"ID":"f2a33723-60fc-479f-912c-799341c0deba","Type":"ContainerDied","Data":"b430a068c77ad7d10889a342dc6ab6db3920205a60980f29617f1496250506c3"} Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.964953 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b430a068c77ad7d10889a342dc6ab6db3920205a60980f29617f1496250506c3" Oct 06 12:45:03 crc kubenswrapper[4958]: I1006 12:45:03.964983 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc" Oct 06 12:45:04 crc kubenswrapper[4958]: I1006 12:45:04.484244 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd"] Oct 06 12:45:04 crc kubenswrapper[4958]: I1006 12:45:04.492876 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-chprd"] Oct 06 12:45:04 crc kubenswrapper[4958]: I1006 12:45:04.925693 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f003179-aa9d-491f-aefb-aaedbeaf375b" path="/var/lib/kubelet/pods/3f003179-aa9d-491f-aefb-aaedbeaf375b/volumes" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.312011 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-297n5"] Oct 06 12:45:15 crc kubenswrapper[4958]: E1006 12:45:15.312991 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a33723-60fc-479f-912c-799341c0deba" containerName="collect-profiles" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.313006 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a33723-60fc-479f-912c-799341c0deba" containerName="collect-profiles" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.313227 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a33723-60fc-479f-912c-799341c0deba" containerName="collect-profiles" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.314688 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.324519 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-297n5"] Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.419476 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fkjl\" (UniqueName: \"kubernetes.io/projected/a620ada0-924f-489e-abab-14587258e21c-kube-api-access-5fkjl\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.419579 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-catalog-content\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.419732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-utilities\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.522020 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fkjl\" (UniqueName: \"kubernetes.io/projected/a620ada0-924f-489e-abab-14587258e21c-kube-api-access-5fkjl\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.522125 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-catalog-content\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.522211 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-utilities\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.522944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-utilities\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.523015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-catalog-content\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.546282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fkjl\" (UniqueName: \"kubernetes.io/projected/a620ada0-924f-489e-abab-14587258e21c-kube-api-access-5fkjl\") pod \"certified-operators-297n5\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:15 crc kubenswrapper[4958]: I1006 12:45:15.655371 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:16 crc kubenswrapper[4958]: I1006 12:45:16.116633 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-297n5"] Oct 06 12:45:17 crc kubenswrapper[4958]: I1006 12:45:17.089450 4958 generic.go:334] "Generic (PLEG): container finished" podID="a620ada0-924f-489e-abab-14587258e21c" containerID="a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d" exitCode=0 Oct 06 12:45:17 crc kubenswrapper[4958]: I1006 12:45:17.089557 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerDied","Data":"a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d"} Oct 06 12:45:17 crc kubenswrapper[4958]: I1006 12:45:17.091305 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerStarted","Data":"166e5d292b0d4fa1876d55cacc3c810d14d1a0188febb050dfe098a830a6d410"} Oct 06 12:45:18 crc kubenswrapper[4958]: I1006 12:45:18.102432 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerStarted","Data":"f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf"} Oct 06 12:45:19 crc kubenswrapper[4958]: I1006 12:45:19.112295 4958 generic.go:334] "Generic (PLEG): container finished" podID="a620ada0-924f-489e-abab-14587258e21c" containerID="f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf" exitCode=0 Oct 06 12:45:19 crc kubenswrapper[4958]: I1006 12:45:19.112359 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" 
event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerDied","Data":"f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf"} Oct 06 12:45:20 crc kubenswrapper[4958]: I1006 12:45:20.124051 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerStarted","Data":"14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb"} Oct 06 12:45:20 crc kubenswrapper[4958]: I1006 12:45:20.151578 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-297n5" podStartSLOduration=2.735821655 podStartE2EDuration="5.151552396s" podCreationTimestamp="2025-10-06 12:45:15 +0000 UTC" firstStartedPulling="2025-10-06 12:45:17.092257101 +0000 UTC m=+3470.978282419" lastFinishedPulling="2025-10-06 12:45:19.507987812 +0000 UTC m=+3473.394013160" observedRunningTime="2025-10-06 12:45:20.143122686 +0000 UTC m=+3474.029147994" watchObservedRunningTime="2025-10-06 12:45:20.151552396 +0000 UTC m=+3474.037577714" Oct 06 12:45:25 crc kubenswrapper[4958]: I1006 12:45:25.655898 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:25 crc kubenswrapper[4958]: I1006 12:45:25.656527 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:25 crc kubenswrapper[4958]: I1006 12:45:25.714733 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:26 crc kubenswrapper[4958]: I1006 12:45:26.237220 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:26 crc kubenswrapper[4958]: I1006 12:45:26.287210 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-297n5"] Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.190329 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-297n5" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="registry-server" containerID="cri-o://14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb" gracePeriod=2 Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.775187 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.880687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fkjl\" (UniqueName: \"kubernetes.io/projected/a620ada0-924f-489e-abab-14587258e21c-kube-api-access-5fkjl\") pod \"a620ada0-924f-489e-abab-14587258e21c\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.880754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-utilities\") pod \"a620ada0-924f-489e-abab-14587258e21c\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.880956 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-catalog-content\") pod \"a620ada0-924f-489e-abab-14587258e21c\" (UID: \"a620ada0-924f-489e-abab-14587258e21c\") " Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.881863 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-utilities" (OuterVolumeSpecName: "utilities") pod "a620ada0-924f-489e-abab-14587258e21c" (UID: 
"a620ada0-924f-489e-abab-14587258e21c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.897301 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a620ada0-924f-489e-abab-14587258e21c-kube-api-access-5fkjl" (OuterVolumeSpecName: "kube-api-access-5fkjl") pod "a620ada0-924f-489e-abab-14587258e21c" (UID: "a620ada0-924f-489e-abab-14587258e21c"). InnerVolumeSpecName "kube-api-access-5fkjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.952759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a620ada0-924f-489e-abab-14587258e21c" (UID: "a620ada0-924f-489e-abab-14587258e21c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.982551 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fkjl\" (UniqueName: \"kubernetes.io/projected/a620ada0-924f-489e-abab-14587258e21c-kube-api-access-5fkjl\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.982586 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:28 crc kubenswrapper[4958]: I1006 12:45:28.982595 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a620ada0-924f-489e-abab-14587258e21c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.201137 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="a620ada0-924f-489e-abab-14587258e21c" containerID="14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb" exitCode=0 Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.201218 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-297n5" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.201245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerDied","Data":"14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb"} Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.201571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-297n5" event={"ID":"a620ada0-924f-489e-abab-14587258e21c","Type":"ContainerDied","Data":"166e5d292b0d4fa1876d55cacc3c810d14d1a0188febb050dfe098a830a6d410"} Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.201596 4958 scope.go:117] "RemoveContainer" containerID="14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.239749 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-297n5"] Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.243186 4958 scope.go:117] "RemoveContainer" containerID="f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.247512 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-297n5"] Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.272651 4958 scope.go:117] "RemoveContainer" containerID="a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.328697 4958 scope.go:117] "RemoveContainer" 
containerID="14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb" Oct 06 12:45:29 crc kubenswrapper[4958]: E1006 12:45:29.329188 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb\": container with ID starting with 14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb not found: ID does not exist" containerID="14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.329429 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb"} err="failed to get container status \"14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb\": rpc error: code = NotFound desc = could not find container \"14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb\": container with ID starting with 14cd027056d88d9078847fee28dc1d74239bd091d146818c8d8dd425cfa081fb not found: ID does not exist" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.329463 4958 scope.go:117] "RemoveContainer" containerID="f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf" Oct 06 12:45:29 crc kubenswrapper[4958]: E1006 12:45:29.329741 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf\": container with ID starting with f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf not found: ID does not exist" containerID="f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.329775 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf"} err="failed to get container status \"f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf\": rpc error: code = NotFound desc = could not find container \"f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf\": container with ID starting with f30249b838e0a3df92db05b4b84888a805a2f28006d5c000249d3932d18400bf not found: ID does not exist" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.329798 4958 scope.go:117] "RemoveContainer" containerID="a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d" Oct 06 12:45:29 crc kubenswrapper[4958]: E1006 12:45:29.330026 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d\": container with ID starting with a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d not found: ID does not exist" containerID="a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d" Oct 06 12:45:29 crc kubenswrapper[4958]: I1006 12:45:29.330057 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d"} err="failed to get container status \"a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d\": rpc error: code = NotFound desc = could not find container \"a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d\": container with ID starting with a238dcb0f096a61e7b8ba1eec03b37e7c8673b9ba45536bb8fe85f92804f197d not found: ID does not exist" Oct 06 12:45:30 crc kubenswrapper[4958]: I1006 12:45:30.934862 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a620ada0-924f-489e-abab-14587258e21c" path="/var/lib/kubelet/pods/a620ada0-924f-489e-abab-14587258e21c/volumes" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 
12:45:32.100371 4958 scope.go:117] "RemoveContainer" containerID="022212a3d1184d485ef8da170af4a07c86e89e97b2c33c18c3fff9675515fcdc" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.223691 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rgzvz"] Oct 06 12:45:32 crc kubenswrapper[4958]: E1006 12:45:32.224185 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="extract-content" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.224203 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="extract-content" Oct 06 12:45:32 crc kubenswrapper[4958]: E1006 12:45:32.224219 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="registry-server" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.224225 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="registry-server" Oct 06 12:45:32 crc kubenswrapper[4958]: E1006 12:45:32.224244 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="extract-utilities" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.224251 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="extract-utilities" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.224475 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a620ada0-924f-489e-abab-14587258e21c" containerName="registry-server" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.225924 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.247596 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgzvz"] Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.365989 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.366068 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47hwf\" (UniqueName: \"kubernetes.io/projected/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-kube-api-access-47hwf\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.366164 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-utilities\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.468420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-utilities\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.468579 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.468650 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47hwf\" (UniqueName: \"kubernetes.io/projected/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-kube-api-access-47hwf\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.468946 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-utilities\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.469088 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.489070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47hwf\" (UniqueName: \"kubernetes.io/projected/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-kube-api-access-47hwf\") pod \"redhat-operators-rgzvz\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:32 crc kubenswrapper[4958]: I1006 12:45:32.573797 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:33 crc kubenswrapper[4958]: I1006 12:45:33.091345 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgzvz"] Oct 06 12:45:33 crc kubenswrapper[4958]: I1006 12:45:33.268294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerStarted","Data":"136b5f8766d8dc8f23960e7727a73c5de8982a04d473a25dabb29d912b56f01a"} Oct 06 12:45:34 crc kubenswrapper[4958]: I1006 12:45:34.277602 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerID="798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8" exitCode=0 Oct 06 12:45:34 crc kubenswrapper[4958]: I1006 12:45:34.277652 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerDied","Data":"798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8"} Oct 06 12:45:37 crc kubenswrapper[4958]: I1006 12:45:37.310351 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerStarted","Data":"fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150"} Oct 06 12:45:38 crc kubenswrapper[4958]: I1006 12:45:38.328236 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerID="fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150" exitCode=0 Oct 06 12:45:38 crc kubenswrapper[4958]: I1006 12:45:38.328325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" 
event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerDied","Data":"fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150"} Oct 06 12:45:41 crc kubenswrapper[4958]: I1006 12:45:41.358768 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerStarted","Data":"655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018"} Oct 06 12:45:41 crc kubenswrapper[4958]: I1006 12:45:41.389308 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rgzvz" podStartSLOduration=3.235027282 podStartE2EDuration="9.389291449s" podCreationTimestamp="2025-10-06 12:45:32 +0000 UTC" firstStartedPulling="2025-10-06 12:45:34.27968128 +0000 UTC m=+3488.165706588" lastFinishedPulling="2025-10-06 12:45:40.433945397 +0000 UTC m=+3494.319970755" observedRunningTime="2025-10-06 12:45:41.382759028 +0000 UTC m=+3495.268784356" watchObservedRunningTime="2025-10-06 12:45:41.389291449 +0000 UTC m=+3495.275316757" Oct 06 12:45:42 crc kubenswrapper[4958]: I1006 12:45:42.574170 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:42 crc kubenswrapper[4958]: I1006 12:45:42.576768 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:43 crc kubenswrapper[4958]: I1006 12:45:43.644652 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rgzvz" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="registry-server" probeResult="failure" output=< Oct 06 12:45:43 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 12:45:43 crc kubenswrapper[4958]: > Oct 06 12:45:52 crc kubenswrapper[4958]: I1006 12:45:52.647511 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:52 crc kubenswrapper[4958]: I1006 12:45:52.698954 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:52 crc kubenswrapper[4958]: I1006 12:45:52.885475 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgzvz"] Oct 06 12:45:54 crc kubenswrapper[4958]: I1006 12:45:54.489161 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rgzvz" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="registry-server" containerID="cri-o://655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018" gracePeriod=2 Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.081080 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgzvz" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.118984 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47hwf\" (UniqueName: \"kubernetes.io/projected/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-kube-api-access-47hwf\") pod \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.119282 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-utilities\") pod \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.119362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content\") pod 
\"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.121917 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-utilities" (OuterVolumeSpecName: "utilities") pod "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" (UID: "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.127181 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-kube-api-access-47hwf" (OuterVolumeSpecName: "kube-api-access-47hwf") pod "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" (UID: "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e"). InnerVolumeSpecName "kube-api-access-47hwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.221383 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" (UID: "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.221554 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content\") pod \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\" (UID: \"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e\") " Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.222070 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.222105 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47hwf\" (UniqueName: \"kubernetes.io/projected/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-kube-api-access-47hwf\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:55 crc kubenswrapper[4958]: W1006 12:45:55.222188 4958 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e/volumes/kubernetes.io~empty-dir/catalog-content Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.222201 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" (UID: "cd8a9fb9-10d3-46e2-9c08-1275733a9d2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.323928 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.503388 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerID="655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018" exitCode=0 Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.503438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerDied","Data":"655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018"} Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.503468 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgzvz" event={"ID":"cd8a9fb9-10d3-46e2-9c08-1275733a9d2e","Type":"ContainerDied","Data":"136b5f8766d8dc8f23960e7727a73c5de8982a04d473a25dabb29d912b56f01a"} Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.503484 4958 scope.go:117] "RemoveContainer" containerID="655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018" Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.503578 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgzvz"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.544346 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgzvz"]
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.546613 4958 scope.go:117] "RemoveContainer" containerID="fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.555379 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rgzvz"]
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.580489 4958 scope.go:117] "RemoveContainer" containerID="798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.628897 4958 scope.go:117] "RemoveContainer" containerID="655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018"
Oct 06 12:45:55 crc kubenswrapper[4958]: E1006 12:45:55.629546 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018\": container with ID starting with 655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018 not found: ID does not exist" containerID="655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.629600 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018"} err="failed to get container status \"655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018\": rpc error: code = NotFound desc = could not find container \"655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018\": container with ID starting with 655b2f594aa06d28470deb41f2ea83da436a4825f380112164322af4298c5018 not found: ID does not exist"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.629631 4958 scope.go:117] "RemoveContainer" containerID="fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150"
Oct 06 12:45:55 crc kubenswrapper[4958]: E1006 12:45:55.630079 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150\": container with ID starting with fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150 not found: ID does not exist" containerID="fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.630109 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150"} err="failed to get container status \"fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150\": rpc error: code = NotFound desc = could not find container \"fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150\": container with ID starting with fd37daca212250fece83f966220fee70590d3fc56358fff7605688f0f857a150 not found: ID does not exist"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.630128 4958 scope.go:117] "RemoveContainer" containerID="798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8"
Oct 06 12:45:55 crc kubenswrapper[4958]: E1006 12:45:55.630418 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8\": container with ID starting with 798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8 not found: ID does not exist" containerID="798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8"
Oct 06 12:45:55 crc kubenswrapper[4958]: I1006 12:45:55.630452 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8"} err="failed to get container status \"798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8\": rpc error: code = NotFound desc = could not find container \"798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8\": container with ID starting with 798836cb056b7a8f16b89c693bfb3570ee392fa4ba487248f77a15cf672644f8 not found: ID does not exist"
Oct 06 12:45:56 crc kubenswrapper[4958]: I1006 12:45:56.940321 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" path="/var/lib/kubelet/pods/cd8a9fb9-10d3-46e2-9c08-1275733a9d2e/volumes"
Oct 06 12:46:53 crc kubenswrapper[4958]: I1006 12:46:53.802052 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:46:53 crc kubenswrapper[4958]: I1006 12:46:53.802760 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:47:23 crc kubenswrapper[4958]: I1006 12:47:23.802000 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:47:23 crc kubenswrapper[4958]: I1006 12:47:23.802599 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:47:53 crc kubenswrapper[4958]: I1006 12:47:53.801394 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:47:53 crc kubenswrapper[4958]: I1006 12:47:53.801984 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:47:53 crc kubenswrapper[4958]: I1006 12:47:53.802047 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z"
Oct 06 12:47:53 crc kubenswrapper[4958]: I1006 12:47:53.803231 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55c91d9b6f0526c3f4d84b26912ccb29208036769e8b90f3b9af429198f89ce0"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 12:47:53 crc kubenswrapper[4958]: I1006 12:47:53.803363 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://55c91d9b6f0526c3f4d84b26912ccb29208036769e8b90f3b9af429198f89ce0" gracePeriod=600
Oct 06 12:47:54 crc kubenswrapper[4958]: I1006 12:47:54.714540 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="55c91d9b6f0526c3f4d84b26912ccb29208036769e8b90f3b9af429198f89ce0" exitCode=0
Oct 06 12:47:54 crc kubenswrapper[4958]: I1006 12:47:54.715331 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"55c91d9b6f0526c3f4d84b26912ccb29208036769e8b90f3b9af429198f89ce0"}
Oct 06 12:47:54 crc kubenswrapper[4958]: I1006 12:47:54.715771 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"}
Oct 06 12:47:54 crc kubenswrapper[4958]: I1006 12:47:54.715823 4958 scope.go:117] "RemoveContainer" containerID="a755faf5e8304153c327fba15e9be2bd5d0327c67bfaf9992d2f4d37443732d6"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.939819 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dq6f6"]
Oct 06 12:50:09 crc kubenswrapper[4958]: E1006 12:50:09.940672 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="extract-utilities"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.940684 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="extract-utilities"
Oct 06 12:50:09 crc kubenswrapper[4958]: E1006 12:50:09.940698 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="registry-server"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.940706 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="registry-server"
Oct 06 12:50:09 crc kubenswrapper[4958]: E1006 12:50:09.940734 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="extract-content"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.940740 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="extract-content"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.940915 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8a9fb9-10d3-46e2-9c08-1275733a9d2e" containerName="registry-server"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.942229 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:09 crc kubenswrapper[4958]: I1006 12:50:09.952850 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq6f6"]
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.120847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-catalog-content\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.120925 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-utilities\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.120973 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd94c\" (UniqueName: \"kubernetes.io/projected/9b23c272-226e-4507-b385-a05634dbd3ec-kube-api-access-wd94c\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.223030 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-utilities\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.223078 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd94c\" (UniqueName: \"kubernetes.io/projected/9b23c272-226e-4507-b385-a05634dbd3ec-kube-api-access-wd94c\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.223210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-catalog-content\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.223597 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-utilities\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.223648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-catalog-content\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.274015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd94c\" (UniqueName: \"kubernetes.io/projected/9b23c272-226e-4507-b385-a05634dbd3ec-kube-api-access-wd94c\") pod \"community-operators-dq6f6\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") " pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:10 crc kubenswrapper[4958]: I1006 12:50:10.560987 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:11 crc kubenswrapper[4958]: I1006 12:50:11.168197 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq6f6"]
Oct 06 12:50:12 crc kubenswrapper[4958]: I1006 12:50:12.168885 4958 generic.go:334] "Generic (PLEG): container finished" podID="9b23c272-226e-4507-b385-a05634dbd3ec" containerID="0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e" exitCode=0
Oct 06 12:50:12 crc kubenswrapper[4958]: I1006 12:50:12.168973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerDied","Data":"0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e"}
Oct 06 12:50:12 crc kubenswrapper[4958]: I1006 12:50:12.169297 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerStarted","Data":"2955cab296736247b8bc6d0ca64934ce0754da6f918320c330bdacf8d8804502"}
Oct 06 12:50:12 crc kubenswrapper[4958]: I1006 12:50:12.172529 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 12:50:13 crc kubenswrapper[4958]: I1006 12:50:13.182109 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerStarted","Data":"6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9"}
Oct 06 12:50:14 crc kubenswrapper[4958]: I1006 12:50:14.196183 4958 generic.go:334] "Generic (PLEG): container finished" podID="9b23c272-226e-4507-b385-a05634dbd3ec" containerID="6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9" exitCode=0
Oct 06 12:50:14 crc kubenswrapper[4958]: I1006 12:50:14.196481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerDied","Data":"6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9"}
Oct 06 12:50:15 crc kubenswrapper[4958]: I1006 12:50:15.205998 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerStarted","Data":"70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544"}
Oct 06 12:50:15 crc kubenswrapper[4958]: I1006 12:50:15.250586 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dq6f6" podStartSLOduration=3.812735393 podStartE2EDuration="6.250563425s" podCreationTimestamp="2025-10-06 12:50:09 +0000 UTC" firstStartedPulling="2025-10-06 12:50:12.172129081 +0000 UTC m=+3766.058154409" lastFinishedPulling="2025-10-06 12:50:14.609957133 +0000 UTC m=+3768.495982441" observedRunningTime="2025-10-06 12:50:15.242454355 +0000 UTC m=+3769.128479663" watchObservedRunningTime="2025-10-06 12:50:15.250563425 +0000 UTC m=+3769.136588733"
Oct 06 12:50:20 crc kubenswrapper[4958]: I1006 12:50:20.561821 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:20 crc kubenswrapper[4958]: I1006 12:50:20.562464 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:20 crc kubenswrapper[4958]: I1006 12:50:20.612436 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:21 crc kubenswrapper[4958]: I1006 12:50:21.306634 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:21 crc kubenswrapper[4958]: I1006 12:50:21.359840 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dq6f6"]
Oct 06 12:50:23 crc kubenswrapper[4958]: I1006 12:50:23.291123 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dq6f6" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="registry-server" containerID="cri-o://70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544" gracePeriod=2
Oct 06 12:50:23 crc kubenswrapper[4958]: I1006 12:50:23.802255 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:50:23 crc kubenswrapper[4958]: I1006 12:50:23.802564 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:50:23 crc kubenswrapper[4958]: I1006 12:50:23.932582 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.101506 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd94c\" (UniqueName: \"kubernetes.io/projected/9b23c272-226e-4507-b385-a05634dbd3ec-kube-api-access-wd94c\") pod \"9b23c272-226e-4507-b385-a05634dbd3ec\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") "
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.101652 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-catalog-content\") pod \"9b23c272-226e-4507-b385-a05634dbd3ec\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") "
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.101751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-utilities\") pod \"9b23c272-226e-4507-b385-a05634dbd3ec\" (UID: \"9b23c272-226e-4507-b385-a05634dbd3ec\") "
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.102770 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-utilities" (OuterVolumeSpecName: "utilities") pod "9b23c272-226e-4507-b385-a05634dbd3ec" (UID: "9b23c272-226e-4507-b385-a05634dbd3ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.103547 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.113063 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b23c272-226e-4507-b385-a05634dbd3ec-kube-api-access-wd94c" (OuterVolumeSpecName: "kube-api-access-wd94c") pod "9b23c272-226e-4507-b385-a05634dbd3ec" (UID: "9b23c272-226e-4507-b385-a05634dbd3ec"). InnerVolumeSpecName "kube-api-access-wd94c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.147933 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b23c272-226e-4507-b385-a05634dbd3ec" (UID: "9b23c272-226e-4507-b385-a05634dbd3ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.205296 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd94c\" (UniqueName: \"kubernetes.io/projected/9b23c272-226e-4507-b385-a05634dbd3ec-kube-api-access-wd94c\") on node \"crc\" DevicePath \"\""
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.205335 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b23c272-226e-4507-b385-a05634dbd3ec-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.314694 4958 generic.go:334] "Generic (PLEG): container finished" podID="9b23c272-226e-4507-b385-a05634dbd3ec" containerID="70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544" exitCode=0
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.314734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerDied","Data":"70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544"}
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.314759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq6f6" event={"ID":"9b23c272-226e-4507-b385-a05634dbd3ec","Type":"ContainerDied","Data":"2955cab296736247b8bc6d0ca64934ce0754da6f918320c330bdacf8d8804502"}
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.314776 4958 scope.go:117] "RemoveContainer" containerID="70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.314885 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq6f6"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.343077 4958 scope.go:117] "RemoveContainer" containerID="6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.350831 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dq6f6"]
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.360104 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dq6f6"]
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.383779 4958 scope.go:117] "RemoveContainer" containerID="0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.421080 4958 scope.go:117] "RemoveContainer" containerID="70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544"
Oct 06 12:50:24 crc kubenswrapper[4958]: E1006 12:50:24.421651 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544\": container with ID starting with 70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544 not found: ID does not exist" containerID="70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.421699 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544"} err="failed to get container status \"70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544\": rpc error: code = NotFound desc = could not find container \"70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544\": container with ID starting with 70b0341ca3f967f3b127829f7087df26ecaf45d06448b8937a25405a68fe4544 not found: ID does not exist"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.421729 4958 scope.go:117] "RemoveContainer" containerID="6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9"
Oct 06 12:50:24 crc kubenswrapper[4958]: E1006 12:50:24.422046 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9\": container with ID starting with 6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9 not found: ID does not exist" containerID="6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.422080 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9"} err="failed to get container status \"6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9\": rpc error: code = NotFound desc = could not find container \"6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9\": container with ID starting with 6d94e859cc0827a821829e14563929cf2f6097a43209748155a3020d888c74f9 not found: ID does not exist"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.422106 4958 scope.go:117] "RemoveContainer" containerID="0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e"
Oct 06 12:50:24 crc kubenswrapper[4958]: E1006 12:50:24.422576 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e\": container with ID starting with 0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e not found: ID does not exist" containerID="0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.422613 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e"} err="failed to get container status \"0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e\": rpc error: code = NotFound desc = could not find container \"0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e\": container with ID starting with 0987319b80269b133e618c76db190843a23d00e82d066e6ac12b9769eb359c6e not found: ID does not exist"
Oct 06 12:50:24 crc kubenswrapper[4958]: I1006 12:50:24.925603 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" path="/var/lib/kubelet/pods/9b23c272-226e-4507-b385-a05634dbd3ec/volumes"
Oct 06 12:50:53 crc kubenswrapper[4958]: I1006 12:50:53.801488 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:50:53 crc kubenswrapper[4958]: I1006 12:50:53.802072 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:51:23 crc kubenswrapper[4958]: I1006 12:51:23.801615 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:51:23 crc kubenswrapper[4958]: I1006 12:51:23.802493 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:51:23 crc kubenswrapper[4958]: I1006 12:51:23.802585 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z"
Oct 06 12:51:23 crc kubenswrapper[4958]: I1006 12:51:23.803906 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 12:51:23 crc kubenswrapper[4958]: I1006 12:51:23.804052 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" gracePeriod=600
Oct 06 12:51:24 crc kubenswrapper[4958]: E1006 12:51:24.043548 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:51:24 crc kubenswrapper[4958]: I1006 12:51:24.932681 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" exitCode=0
Oct 06 12:51:24 crc kubenswrapper[4958]: I1006 12:51:24.932735 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"}
Oct 06 12:51:24 crc kubenswrapper[4958]: I1006 12:51:24.932772 4958 scope.go:117] "RemoveContainer" containerID="55c91d9b6f0526c3f4d84b26912ccb29208036769e8b90f3b9af429198f89ce0"
Oct 06 12:51:24 crc kubenswrapper[4958]: I1006 12:51:24.933901 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:51:24 crc kubenswrapper[4958]: E1006 12:51:24.934694 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:51:38 crc kubenswrapper[4958]: I1006 12:51:38.914592 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:51:38 crc kubenswrapper[4958]: E1006 12:51:38.915927 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:51:53 crc kubenswrapper[4958]: I1006 12:51:53.913735 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:51:53 crc kubenswrapper[4958]: E1006 12:51:53.914537 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:52:04 crc kubenswrapper[4958]: I1006 12:52:04.913727 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:52:04 crc kubenswrapper[4958]: E1006 12:52:04.914664 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:52:15 crc kubenswrapper[4958]: I1006 12:52:15.913859 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:52:15 crc kubenswrapper[4958]: E1006 12:52:15.914733 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:52:29 crc kubenswrapper[4958]: I1006 12:52:29.913570 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:52:29 crc kubenswrapper[4958]: E1006 12:52:29.914454 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:52:40 crc kubenswrapper[4958]: I1006 12:52:40.913186 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:52:40 crc kubenswrapper[4958]: E1006 12:52:40.914059 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 12:52:52 crc kubenswrapper[4958]: I1006 12:52:52.913818 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc"
Oct 06 12:52:52 crc kubenswrapper[4958]: E1006 12:52:52.915072 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 
06 12:53:05 crc kubenswrapper[4958]: I1006 12:53:05.913955 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:53:05 crc kubenswrapper[4958]: E1006 12:53:05.915258 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:53:20 crc kubenswrapper[4958]: I1006 12:53:20.915068 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:53:20 crc kubenswrapper[4958]: E1006 12:53:20.916179 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:53:32 crc kubenswrapper[4958]: I1006 12:53:32.913820 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:53:32 crc kubenswrapper[4958]: E1006 12:53:32.915003 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:53:44 crc kubenswrapper[4958]: I1006 12:53:44.913892 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:53:44 crc kubenswrapper[4958]: E1006 12:53:44.915022 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:53:58 crc kubenswrapper[4958]: I1006 12:53:58.914996 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:53:58 crc kubenswrapper[4958]: E1006 12:53:58.916281 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:54:09 crc kubenswrapper[4958]: I1006 12:54:09.912968 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:54:09 crc kubenswrapper[4958]: E1006 12:54:09.913923 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:54:23 crc kubenswrapper[4958]: I1006 12:54:23.913966 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:54:23 crc kubenswrapper[4958]: E1006 12:54:23.914686 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:54:37 crc kubenswrapper[4958]: I1006 12:54:37.913699 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:54:37 crc kubenswrapper[4958]: E1006 12:54:37.914740 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:54:51 crc kubenswrapper[4958]: I1006 12:54:51.912797 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:54:51 crc kubenswrapper[4958]: E1006 12:54:51.913417 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:55:04 crc kubenswrapper[4958]: I1006 12:55:04.915879 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:55:04 crc kubenswrapper[4958]: E1006 12:55:04.917126 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:55:16 crc kubenswrapper[4958]: I1006 12:55:16.928218 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:55:16 crc kubenswrapper[4958]: E1006 12:55:16.929541 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:55:27 crc kubenswrapper[4958]: I1006 12:55:27.914199 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:55:27 crc kubenswrapper[4958]: E1006 12:55:27.915494 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.104576 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8825"] Oct 06 12:55:35 crc kubenswrapper[4958]: E1006 12:55:35.105899 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="extract-utilities" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.105914 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="extract-utilities" Oct 06 12:55:35 crc kubenswrapper[4958]: E1006 12:55:35.105944 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="registry-server" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.105950 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="registry-server" Oct 06 12:55:35 crc kubenswrapper[4958]: E1006 12:55:35.105961 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="extract-content" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.105967 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="extract-content" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.106127 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b23c272-226e-4507-b385-a05634dbd3ec" containerName="registry-server" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.108259 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.123858 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8825"] Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.126933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-catalog-content\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.127092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4lh\" (UniqueName: \"kubernetes.io/projected/fa4dfbcb-139d-4e14-affb-42a2b373694a-kube-api-access-6n4lh\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.127193 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-utilities\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.229718 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-catalog-content\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.229802 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6n4lh\" (UniqueName: \"kubernetes.io/projected/fa4dfbcb-139d-4e14-affb-42a2b373694a-kube-api-access-6n4lh\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.229855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-utilities\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.230314 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-catalog-content\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.230411 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-utilities\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.637108 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n4lh\" (UniqueName: \"kubernetes.io/projected/fa4dfbcb-139d-4e14-affb-42a2b373694a-kube-api-access-6n4lh\") pod \"redhat-marketplace-j8825\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:35 crc kubenswrapper[4958]: I1006 12:55:35.735565 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:36 crc kubenswrapper[4958]: I1006 12:55:36.227330 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8825"] Oct 06 12:55:36 crc kubenswrapper[4958]: W1006 12:55:36.243350 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4dfbcb_139d_4e14_affb_42a2b373694a.slice/crio-777f6c14cdf15c221e5ac2d32d4084f343dfeab2d47b238fd3e543f686c9daff WatchSource:0}: Error finding container 777f6c14cdf15c221e5ac2d32d4084f343dfeab2d47b238fd3e543f686c9daff: Status 404 returned error can't find the container with id 777f6c14cdf15c221e5ac2d32d4084f343dfeab2d47b238fd3e543f686c9daff Oct 06 12:55:36 crc kubenswrapper[4958]: I1006 12:55:36.697276 4958 generic.go:334] "Generic (PLEG): container finished" podID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerID="47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179" exitCode=0 Oct 06 12:55:36 crc kubenswrapper[4958]: I1006 12:55:36.697360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8825" event={"ID":"fa4dfbcb-139d-4e14-affb-42a2b373694a","Type":"ContainerDied","Data":"47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179"} Oct 06 12:55:36 crc kubenswrapper[4958]: I1006 12:55:36.697939 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8825" event={"ID":"fa4dfbcb-139d-4e14-affb-42a2b373694a","Type":"ContainerStarted","Data":"777f6c14cdf15c221e5ac2d32d4084f343dfeab2d47b238fd3e543f686c9daff"} Oct 06 12:55:36 crc kubenswrapper[4958]: I1006 12:55:36.699526 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:55:38 crc kubenswrapper[4958]: I1006 12:55:38.722139 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerID="428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1" exitCode=0 Oct 06 12:55:38 crc kubenswrapper[4958]: I1006 12:55:38.722317 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8825" event={"ID":"fa4dfbcb-139d-4e14-affb-42a2b373694a","Type":"ContainerDied","Data":"428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1"} Oct 06 12:55:39 crc kubenswrapper[4958]: I1006 12:55:39.746595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8825" event={"ID":"fa4dfbcb-139d-4e14-affb-42a2b373694a","Type":"ContainerStarted","Data":"27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863"} Oct 06 12:55:39 crc kubenswrapper[4958]: I1006 12:55:39.779053 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8825" podStartSLOduration=2.055829326 podStartE2EDuration="4.77903577s" podCreationTimestamp="2025-10-06 12:55:35 +0000 UTC" firstStartedPulling="2025-10-06 12:55:36.699193091 +0000 UTC m=+4090.585218419" lastFinishedPulling="2025-10-06 12:55:39.422399555 +0000 UTC m=+4093.308424863" observedRunningTime="2025-10-06 12:55:39.767906327 +0000 UTC m=+4093.653931665" watchObservedRunningTime="2025-10-06 12:55:39.77903577 +0000 UTC m=+4093.665061078" Oct 06 12:55:40 crc kubenswrapper[4958]: I1006 12:55:40.913928 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:55:40 crc kubenswrapper[4958]: E1006 12:55:40.914562 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:55:45 crc kubenswrapper[4958]: I1006 12:55:45.736104 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:45 crc kubenswrapper[4958]: I1006 12:55:45.736807 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:45 crc kubenswrapper[4958]: I1006 12:55:45.786672 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:45 crc kubenswrapper[4958]: I1006 12:55:45.881029 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:46 crc kubenswrapper[4958]: I1006 12:55:46.033108 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8825"] Oct 06 12:55:47 crc kubenswrapper[4958]: I1006 12:55:47.845334 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8825" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="registry-server" containerID="cri-o://27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863" gracePeriod=2 Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.401532 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.432487 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-catalog-content\") pod \"fa4dfbcb-139d-4e14-affb-42a2b373694a\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.432534 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-utilities\") pod \"fa4dfbcb-139d-4e14-affb-42a2b373694a\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.432751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n4lh\" (UniqueName: \"kubernetes.io/projected/fa4dfbcb-139d-4e14-affb-42a2b373694a-kube-api-access-6n4lh\") pod \"fa4dfbcb-139d-4e14-affb-42a2b373694a\" (UID: \"fa4dfbcb-139d-4e14-affb-42a2b373694a\") " Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.434203 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-utilities" (OuterVolumeSpecName: "utilities") pod "fa4dfbcb-139d-4e14-affb-42a2b373694a" (UID: "fa4dfbcb-139d-4e14-affb-42a2b373694a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.439646 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4dfbcb-139d-4e14-affb-42a2b373694a-kube-api-access-6n4lh" (OuterVolumeSpecName: "kube-api-access-6n4lh") pod "fa4dfbcb-139d-4e14-affb-42a2b373694a" (UID: "fa4dfbcb-139d-4e14-affb-42a2b373694a"). InnerVolumeSpecName "kube-api-access-6n4lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.446649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa4dfbcb-139d-4e14-affb-42a2b373694a" (UID: "fa4dfbcb-139d-4e14-affb-42a2b373694a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.535333 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.535374 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4dfbcb-139d-4e14-affb-42a2b373694a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.535396 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n4lh\" (UniqueName: \"kubernetes.io/projected/fa4dfbcb-139d-4e14-affb-42a2b373694a-kube-api-access-6n4lh\") on node \"crc\" DevicePath \"\"" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.859216 4958 generic.go:334] "Generic (PLEG): container finished" podID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerID="27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863" exitCode=0 Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.859327 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8825" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.859316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8825" event={"ID":"fa4dfbcb-139d-4e14-affb-42a2b373694a","Type":"ContainerDied","Data":"27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863"} Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.859887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8825" event={"ID":"fa4dfbcb-139d-4e14-affb-42a2b373694a","Type":"ContainerDied","Data":"777f6c14cdf15c221e5ac2d32d4084f343dfeab2d47b238fd3e543f686c9daff"} Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.859921 4958 scope.go:117] "RemoveContainer" containerID="27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.918423 4958 scope.go:117] "RemoveContainer" containerID="428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1" Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.925702 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8825"] Oct 06 12:55:48 crc kubenswrapper[4958]: I1006 12:55:48.936029 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8825"] Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.762464 4958 scope.go:117] "RemoveContainer" containerID="47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179" Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.804182 4958 scope.go:117] "RemoveContainer" containerID="27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863" Oct 06 12:55:49 crc kubenswrapper[4958]: E1006 12:55:49.804896 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863\": container with ID starting with 27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863 not found: ID does not exist" containerID="27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863" Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.804954 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863"} err="failed to get container status \"27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863\": rpc error: code = NotFound desc = could not find container \"27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863\": container with ID starting with 27d81d5b075feb2bb4ef58983c8fd98df10e6e8402be65b217d72ca9d601f863 not found: ID does not exist" Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.804987 4958 scope.go:117] "RemoveContainer" containerID="428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1" Oct 06 12:55:49 crc kubenswrapper[4958]: E1006 12:55:49.805432 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1\": container with ID starting with 428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1 not found: ID does not exist" containerID="428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1" Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.805485 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1"} err="failed to get container status \"428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1\": rpc error: code = NotFound desc = could not find container \"428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1\": container with ID 
starting with 428e5af7e8c7c54051195b00971dbe80857c33237b995d4b339f6387630ef2c1 not found: ID does not exist" Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.805520 4958 scope.go:117] "RemoveContainer" containerID="47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179" Oct 06 12:55:49 crc kubenswrapper[4958]: E1006 12:55:49.805829 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179\": container with ID starting with 47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179 not found: ID does not exist" containerID="47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179" Oct 06 12:55:49 crc kubenswrapper[4958]: I1006 12:55:49.805871 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179"} err="failed to get container status \"47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179\": rpc error: code = NotFound desc = could not find container \"47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179\": container with ID starting with 47f176a6e28628067bc9f9028566a2a324c4d032467ae4acb4aaf227eee60179 not found: ID does not exist" Oct 06 12:55:50 crc kubenswrapper[4958]: I1006 12:55:50.933556 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" path="/var/lib/kubelet/pods/fa4dfbcb-139d-4e14-affb-42a2b373694a/volumes" Oct 06 12:55:53 crc kubenswrapper[4958]: I1006 12:55:53.912898 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:55:53 crc kubenswrapper[4958]: E1006 12:55:53.913371 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:56:05 crc kubenswrapper[4958]: I1006 12:56:05.913406 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:56:05 crc kubenswrapper[4958]: E1006 12:56:05.914192 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.791941 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2g89"] Oct 06 12:56:12 crc kubenswrapper[4958]: E1006 12:56:12.792690 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="extract-content" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.792706 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="extract-content" Oct 06 12:56:12 crc kubenswrapper[4958]: E1006 12:56:12.792749 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="registry-server" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.792757 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="registry-server" Oct 06 12:56:12 crc kubenswrapper[4958]: E1006 12:56:12.792768 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="extract-utilities" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.792777 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="extract-utilities" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.793019 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4dfbcb-139d-4e14-affb-42a2b373694a" containerName="registry-server" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.794818 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.818547 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2g89"] Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.838743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-utilities\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.838824 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9wx\" (UniqueName: \"kubernetes.io/projected/b11c8e70-c15b-4269-aadd-d1a0b91dac80-kube-api-access-kx9wx\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.838910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-catalog-content\") pod 
\"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.940764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-utilities\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.941045 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9wx\" (UniqueName: \"kubernetes.io/projected/b11c8e70-c15b-4269-aadd-d1a0b91dac80-kube-api-access-kx9wx\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.941101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-catalog-content\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.941585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-utilities\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.941664 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-catalog-content\") pod \"redhat-operators-t2g89\" (UID: 
\"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:12 crc kubenswrapper[4958]: I1006 12:56:12.960856 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9wx\" (UniqueName: \"kubernetes.io/projected/b11c8e70-c15b-4269-aadd-d1a0b91dac80-kube-api-access-kx9wx\") pod \"redhat-operators-t2g89\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:13 crc kubenswrapper[4958]: I1006 12:56:13.124189 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:13 crc kubenswrapper[4958]: I1006 12:56:13.600529 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2g89"] Oct 06 12:56:14 crc kubenswrapper[4958]: I1006 12:56:14.133071 4958 generic.go:334] "Generic (PLEG): container finished" podID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerID="f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8" exitCode=0 Oct 06 12:56:14 crc kubenswrapper[4958]: I1006 12:56:14.133197 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerDied","Data":"f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8"} Oct 06 12:56:14 crc kubenswrapper[4958]: I1006 12:56:14.133245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerStarted","Data":"f07ae08d1cc932ec1ce40ba7e7248fdf3d627b5178204e0a826d51776fc0f1f3"} Oct 06 12:56:16 crc kubenswrapper[4958]: I1006 12:56:16.159704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" 
event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerStarted","Data":"10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b"} Oct 06 12:56:17 crc kubenswrapper[4958]: I1006 12:56:17.170625 4958 generic.go:334] "Generic (PLEG): container finished" podID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerID="10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b" exitCode=0 Oct 06 12:56:17 crc kubenswrapper[4958]: I1006 12:56:17.170680 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerDied","Data":"10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b"} Oct 06 12:56:17 crc kubenswrapper[4958]: I1006 12:56:17.918304 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:56:17 crc kubenswrapper[4958]: E1006 12:56:17.919094 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 12:56:18 crc kubenswrapper[4958]: I1006 12:56:18.180799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerStarted","Data":"0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558"} Oct 06 12:56:18 crc kubenswrapper[4958]: I1006 12:56:18.196208 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2g89" podStartSLOduration=2.643717964 podStartE2EDuration="6.196192841s" 
podCreationTimestamp="2025-10-06 12:56:12 +0000 UTC" firstStartedPulling="2025-10-06 12:56:14.135564378 +0000 UTC m=+4128.021589726" lastFinishedPulling="2025-10-06 12:56:17.688039285 +0000 UTC m=+4131.574064603" observedRunningTime="2025-10-06 12:56:18.195174294 +0000 UTC m=+4132.081199612" watchObservedRunningTime="2025-10-06 12:56:18.196192841 +0000 UTC m=+4132.082218149" Oct 06 12:56:23 crc kubenswrapper[4958]: I1006 12:56:23.124900 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:23 crc kubenswrapper[4958]: I1006 12:56:23.125438 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:23 crc kubenswrapper[4958]: I1006 12:56:23.408989 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:23 crc kubenswrapper[4958]: I1006 12:56:23.563976 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:23 crc kubenswrapper[4958]: I1006 12:56:23.645731 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2g89"] Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.246302 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2g89" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="registry-server" containerID="cri-o://0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558" gracePeriod=2 Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.834205 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.929325 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx9wx\" (UniqueName: \"kubernetes.io/projected/b11c8e70-c15b-4269-aadd-d1a0b91dac80-kube-api-access-kx9wx\") pod \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.929439 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-utilities\") pod \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.929640 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-catalog-content\") pod \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\" (UID: \"b11c8e70-c15b-4269-aadd-d1a0b91dac80\") " Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.930728 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-utilities" (OuterVolumeSpecName: "utilities") pod "b11c8e70-c15b-4269-aadd-d1a0b91dac80" (UID: "b11c8e70-c15b-4269-aadd-d1a0b91dac80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:56:25 crc kubenswrapper[4958]: I1006 12:56:25.941844 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11c8e70-c15b-4269-aadd-d1a0b91dac80-kube-api-access-kx9wx" (OuterVolumeSpecName: "kube-api-access-kx9wx") pod "b11c8e70-c15b-4269-aadd-d1a0b91dac80" (UID: "b11c8e70-c15b-4269-aadd-d1a0b91dac80"). InnerVolumeSpecName "kube-api-access-kx9wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.029657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11c8e70-c15b-4269-aadd-d1a0b91dac80" (UID: "b11c8e70-c15b-4269-aadd-d1a0b91dac80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.032374 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx9wx\" (UniqueName: \"kubernetes.io/projected/b11c8e70-c15b-4269-aadd-d1a0b91dac80-kube-api-access-kx9wx\") on node \"crc\" DevicePath \"\"" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.032416 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.032431 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11c8e70-c15b-4269-aadd-d1a0b91dac80-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.260837 4958 generic.go:334] "Generic (PLEG): container finished" podID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerID="0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558" exitCode=0 Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.261520 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerDied","Data":"0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558"} Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.261583 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2g89" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.261623 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2g89" event={"ID":"b11c8e70-c15b-4269-aadd-d1a0b91dac80","Type":"ContainerDied","Data":"f07ae08d1cc932ec1ce40ba7e7248fdf3d627b5178204e0a826d51776fc0f1f3"} Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.261664 4958 scope.go:117] "RemoveContainer" containerID="0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.285903 4958 scope.go:117] "RemoveContainer" containerID="10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.302840 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2g89"] Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.310454 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2g89"] Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.327609 4958 scope.go:117] "RemoveContainer" containerID="f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.374020 4958 scope.go:117] "RemoveContainer" containerID="0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558" Oct 06 12:56:26 crc kubenswrapper[4958]: E1006 12:56:26.374498 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558\": container with ID starting with 0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558 not found: ID does not exist" containerID="0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.374575 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558"} err="failed to get container status \"0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558\": rpc error: code = NotFound desc = could not find container \"0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558\": container with ID starting with 0123e00308acec054d51c6bdea15547f9621d4e5263875bd655f677ef248c558 not found: ID does not exist" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.374653 4958 scope.go:117] "RemoveContainer" containerID="10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b" Oct 06 12:56:26 crc kubenswrapper[4958]: E1006 12:56:26.374973 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b\": container with ID starting with 10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b not found: ID does not exist" containerID="10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.374993 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b"} err="failed to get container status \"10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b\": rpc error: code = NotFound desc = could not find container \"10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b\": container with ID starting with 10a6dc14a9d6475c5353bbf5365f52d5b511559716a2bc4c24e691066fbbb49b not found: ID does not exist" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.375007 4958 scope.go:117] "RemoveContainer" containerID="f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8" Oct 06 12:56:26 crc kubenswrapper[4958]: E1006 
12:56:26.375341 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8\": container with ID starting with f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8 not found: ID does not exist" containerID="f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.375377 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8"} err="failed to get container status \"f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8\": rpc error: code = NotFound desc = could not find container \"f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8\": container with ID starting with f2fd025dd2681dd70c0774520ddd0a189971a89bfc814d28c7915c735e40c5f8 not found: ID does not exist" Oct 06 12:56:26 crc kubenswrapper[4958]: I1006 12:56:26.930317 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" path="/var/lib/kubelet/pods/b11c8e70-c15b-4269-aadd-d1a0b91dac80/volumes" Oct 06 12:56:31 crc kubenswrapper[4958]: I1006 12:56:31.913201 4958 scope.go:117] "RemoveContainer" containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 12:56:32 crc kubenswrapper[4958]: I1006 12:56:32.327819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"4b480221c89ad6ea54b7e021fe811dda80d2531729ab9dcc45e8465cea93e58b"} Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.420689 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hhw5"] Oct 06 12:56:36 crc kubenswrapper[4958]: E1006 
12:56:36.421581 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="extract-utilities" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.421598 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="extract-utilities" Oct 06 12:56:36 crc kubenswrapper[4958]: E1006 12:56:36.421614 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="extract-content" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.421623 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="extract-content" Oct 06 12:56:36 crc kubenswrapper[4958]: E1006 12:56:36.421660 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="registry-server" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.421668 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="registry-server" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.421902 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11c8e70-c15b-4269-aadd-d1a0b91dac80" containerName="registry-server" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.423977 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.433352 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hhw5"] Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.526815 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9gpg\" (UniqueName: \"kubernetes.io/projected/68031784-42b9-47c8-b884-df4858d89d1b-kube-api-access-n9gpg\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.526952 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-utilities\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.527367 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-catalog-content\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.629057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-catalog-content\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.629161 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9gpg\" (UniqueName: \"kubernetes.io/projected/68031784-42b9-47c8-b884-df4858d89d1b-kube-api-access-n9gpg\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.629262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-utilities\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.629635 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-catalog-content\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.629691 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-utilities\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.654981 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9gpg\" (UniqueName: \"kubernetes.io/projected/68031784-42b9-47c8-b884-df4858d89d1b-kube-api-access-n9gpg\") pod \"certified-operators-5hhw5\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:36 crc kubenswrapper[4958]: I1006 12:56:36.757500 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:37 crc kubenswrapper[4958]: I1006 12:56:37.250323 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hhw5"] Oct 06 12:56:37 crc kubenswrapper[4958]: W1006 12:56:37.262377 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68031784_42b9_47c8_b884_df4858d89d1b.slice/crio-41f645367c56931ffa654aa02df349b2baee211172753c1060dd8c6054d65ad1 WatchSource:0}: Error finding container 41f645367c56931ffa654aa02df349b2baee211172753c1060dd8c6054d65ad1: Status 404 returned error can't find the container with id 41f645367c56931ffa654aa02df349b2baee211172753c1060dd8c6054d65ad1 Oct 06 12:56:37 crc kubenswrapper[4958]: I1006 12:56:37.382028 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerStarted","Data":"41f645367c56931ffa654aa02df349b2baee211172753c1060dd8c6054d65ad1"} Oct 06 12:56:38 crc kubenswrapper[4958]: I1006 12:56:38.390991 4958 generic.go:334] "Generic (PLEG): container finished" podID="68031784-42b9-47c8-b884-df4858d89d1b" containerID="d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588" exitCode=0 Oct 06 12:56:38 crc kubenswrapper[4958]: I1006 12:56:38.391086 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerDied","Data":"d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588"} Oct 06 12:56:39 crc kubenswrapper[4958]: I1006 12:56:39.406284 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" 
event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerStarted","Data":"d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9"} Oct 06 12:56:40 crc kubenswrapper[4958]: I1006 12:56:40.420967 4958 generic.go:334] "Generic (PLEG): container finished" podID="68031784-42b9-47c8-b884-df4858d89d1b" containerID="d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9" exitCode=0 Oct 06 12:56:40 crc kubenswrapper[4958]: I1006 12:56:40.421093 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerDied","Data":"d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9"} Oct 06 12:56:41 crc kubenswrapper[4958]: I1006 12:56:41.436741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerStarted","Data":"18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c"} Oct 06 12:56:41 crc kubenswrapper[4958]: I1006 12:56:41.470360 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hhw5" podStartSLOduration=2.812418966 podStartE2EDuration="5.470338697s" podCreationTimestamp="2025-10-06 12:56:36 +0000 UTC" firstStartedPulling="2025-10-06 12:56:38.39590419 +0000 UTC m=+4152.281929508" lastFinishedPulling="2025-10-06 12:56:41.053823911 +0000 UTC m=+4154.939849239" observedRunningTime="2025-10-06 12:56:41.458270949 +0000 UTC m=+4155.344296297" watchObservedRunningTime="2025-10-06 12:56:41.470338697 +0000 UTC m=+4155.356364025" Oct 06 12:56:46 crc kubenswrapper[4958]: I1006 12:56:46.757698 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:46 crc kubenswrapper[4958]: I1006 12:56:46.758458 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:46 crc kubenswrapper[4958]: I1006 12:56:46.837930 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:47 crc kubenswrapper[4958]: I1006 12:56:47.558619 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:47 crc kubenswrapper[4958]: I1006 12:56:47.609881 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hhw5"] Oct 06 12:56:49 crc kubenswrapper[4958]: I1006 12:56:49.510316 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5hhw5" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="registry-server" containerID="cri-o://18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c" gracePeriod=2 Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.065160 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.124009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-catalog-content\") pod \"68031784-42b9-47c8-b884-df4858d89d1b\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.124078 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gpg\" (UniqueName: \"kubernetes.io/projected/68031784-42b9-47c8-b884-df4858d89d1b-kube-api-access-n9gpg\") pod \"68031784-42b9-47c8-b884-df4858d89d1b\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.124247 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-utilities\") pod \"68031784-42b9-47c8-b884-df4858d89d1b\" (UID: \"68031784-42b9-47c8-b884-df4858d89d1b\") " Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.125187 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-utilities" (OuterVolumeSpecName: "utilities") pod "68031784-42b9-47c8-b884-df4858d89d1b" (UID: "68031784-42b9-47c8-b884-df4858d89d1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.131696 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68031784-42b9-47c8-b884-df4858d89d1b-kube-api-access-n9gpg" (OuterVolumeSpecName: "kube-api-access-n9gpg") pod "68031784-42b9-47c8-b884-df4858d89d1b" (UID: "68031784-42b9-47c8-b884-df4858d89d1b"). InnerVolumeSpecName "kube-api-access-n9gpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.228122 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.228198 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9gpg\" (UniqueName: \"kubernetes.io/projected/68031784-42b9-47c8-b884-df4858d89d1b-kube-api-access-n9gpg\") on node \"crc\" DevicePath \"\"" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.271312 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68031784-42b9-47c8-b884-df4858d89d1b" (UID: "68031784-42b9-47c8-b884-df4858d89d1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.330698 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68031784-42b9-47c8-b884-df4858d89d1b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.542544 4958 generic.go:334] "Generic (PLEG): container finished" podID="68031784-42b9-47c8-b884-df4858d89d1b" containerID="18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c" exitCode=0 Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.542662 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hhw5" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.542691 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerDied","Data":"18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c"} Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.543233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hhw5" event={"ID":"68031784-42b9-47c8-b884-df4858d89d1b","Type":"ContainerDied","Data":"41f645367c56931ffa654aa02df349b2baee211172753c1060dd8c6054d65ad1"} Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.543269 4958 scope.go:117] "RemoveContainer" containerID="18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.579044 4958 scope.go:117] "RemoveContainer" containerID="d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.594896 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hhw5"] Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.608330 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5hhw5"] Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.613664 4958 scope.go:117] "RemoveContainer" containerID="d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.649584 4958 scope.go:117] "RemoveContainer" containerID="18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c" Oct 06 12:56:50 crc kubenswrapper[4958]: E1006 12:56:50.650802 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c\": container with ID starting with 18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c not found: ID does not exist" containerID="18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.650861 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c"} err="failed to get container status \"18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c\": rpc error: code = NotFound desc = could not find container \"18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c\": container with ID starting with 18a41337892510d21c6388666182d7c838d2c0d5ed4b7fe69063e6d5cdce012c not found: ID does not exist" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.650895 4958 scope.go:117] "RemoveContainer" containerID="d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9" Oct 06 12:56:50 crc kubenswrapper[4958]: E1006 12:56:50.651294 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9\": container with ID starting with d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9 not found: ID does not exist" containerID="d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.651358 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9"} err="failed to get container status \"d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9\": rpc error: code = NotFound desc = could not find container \"d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9\": container with ID 
starting with d41de6baf2b6b920eef6ad3f2cd62b9ddac591d518cc177a6d43ca692d9ad0f9 not found: ID does not exist" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.651390 4958 scope.go:117] "RemoveContainer" containerID="d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588" Oct 06 12:56:50 crc kubenswrapper[4958]: E1006 12:56:50.651902 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588\": container with ID starting with d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588 not found: ID does not exist" containerID="d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.651937 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588"} err="failed to get container status \"d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588\": rpc error: code = NotFound desc = could not find container \"d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588\": container with ID starting with d38818fae0a5c0b8a071b3f4c452d41fd414b676952d2f3b4e9ac5f9536ba588 not found: ID does not exist" Oct 06 12:56:50 crc kubenswrapper[4958]: I1006 12:56:50.923480 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68031784-42b9-47c8-b884-df4858d89d1b" path="/var/lib/kubelet/pods/68031784-42b9-47c8-b884-df4858d89d1b/volumes" Oct 06 12:58:53 crc kubenswrapper[4958]: I1006 12:58:53.801830 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:58:53 crc kubenswrapper[4958]: I1006 
12:58:53.802335 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:59:23 crc kubenswrapper[4958]: I1006 12:59:23.802352 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:59:23 crc kubenswrapper[4958]: I1006 12:59:23.802908 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:59:53 crc kubenswrapper[4958]: I1006 12:59:53.802374 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:59:53 crc kubenswrapper[4958]: I1006 12:59:53.802894 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:59:53 crc kubenswrapper[4958]: I1006 12:59:53.802956 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 12:59:53 crc kubenswrapper[4958]: I1006 12:59:53.803833 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b480221c89ad6ea54b7e021fe811dda80d2531729ab9dcc45e8465cea93e58b"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:59:53 crc kubenswrapper[4958]: I1006 12:59:53.803899 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://4b480221c89ad6ea54b7e021fe811dda80d2531729ab9dcc45e8465cea93e58b" gracePeriod=600 Oct 06 12:59:54 crc kubenswrapper[4958]: I1006 12:59:54.478875 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="4b480221c89ad6ea54b7e021fe811dda80d2531729ab9dcc45e8465cea93e58b" exitCode=0 Oct 06 12:59:54 crc kubenswrapper[4958]: I1006 12:59:54.478929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"4b480221c89ad6ea54b7e021fe811dda80d2531729ab9dcc45e8465cea93e58b"} Oct 06 12:59:54 crc kubenswrapper[4958]: I1006 12:59:54.479443 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"} Oct 06 12:59:54 crc kubenswrapper[4958]: I1006 12:59:54.479465 4958 scope.go:117] "RemoveContainer" 
containerID="d0b06b048e692dd8fadf2c195bdf63a8211a24a34b55945146baca45ef1b9dbc" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.163338 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85"] Oct 06 13:00:00 crc kubenswrapper[4958]: E1006 13:00:00.164507 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="extract-content" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.164529 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="extract-content" Oct 06 13:00:00 crc kubenswrapper[4958]: E1006 13:00:00.164580 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="extract-utilities" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.164591 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="extract-utilities" Oct 06 13:00:00 crc kubenswrapper[4958]: E1006 13:00:00.164616 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="registry-server" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.164628 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="registry-server" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.164891 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="68031784-42b9-47c8-b884-df4858d89d1b" containerName="registry-server" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.165770 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.168829 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.169129 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.179525 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85"] Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.295199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fe6646-6eed-402c-a63e-2275c8cb108b-config-volume\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.295361 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxff\" (UniqueName: \"kubernetes.io/projected/07fe6646-6eed-402c-a63e-2275c8cb108b-kube-api-access-zrxff\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.295394 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07fe6646-6eed-402c-a63e-2275c8cb108b-secret-volume\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.397582 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fe6646-6eed-402c-a63e-2275c8cb108b-config-volume\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.397787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrxff\" (UniqueName: \"kubernetes.io/projected/07fe6646-6eed-402c-a63e-2275c8cb108b-kube-api-access-zrxff\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.397839 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07fe6646-6eed-402c-a63e-2275c8cb108b-secret-volume\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.398477 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fe6646-6eed-402c-a63e-2275c8cb108b-config-volume\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.409436 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/07fe6646-6eed-402c-a63e-2275c8cb108b-secret-volume\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.426381 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrxff\" (UniqueName: \"kubernetes.io/projected/07fe6646-6eed-402c-a63e-2275c8cb108b-kube-api-access-zrxff\") pod \"collect-profiles-29329260-p9x85\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.499630 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:00 crc kubenswrapper[4958]: I1006 13:00:00.988637 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85"] Oct 06 13:00:01 crc kubenswrapper[4958]: I1006 13:00:01.557461 4958 generic.go:334] "Generic (PLEG): container finished" podID="07fe6646-6eed-402c-a63e-2275c8cb108b" containerID="ea89cb1703e6e9a428c895b352f815adcee23a7d29bdb54e85bf4a677a5304c4" exitCode=0 Oct 06 13:00:01 crc kubenswrapper[4958]: I1006 13:00:01.557555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" event={"ID":"07fe6646-6eed-402c-a63e-2275c8cb108b","Type":"ContainerDied","Data":"ea89cb1703e6e9a428c895b352f815adcee23a7d29bdb54e85bf4a677a5304c4"} Oct 06 13:00:01 crc kubenswrapper[4958]: I1006 13:00:01.557607 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" 
event={"ID":"07fe6646-6eed-402c-a63e-2275c8cb108b","Type":"ContainerStarted","Data":"a850367db04a18757a1770cab45c57ee63c7fc42032d1af1d7502bd3e1f451d4"} Oct 06 13:00:02 crc kubenswrapper[4958]: I1006 13:00:02.993893 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.151435 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fe6646-6eed-402c-a63e-2275c8cb108b-config-volume\") pod \"07fe6646-6eed-402c-a63e-2275c8cb108b\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.151677 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrxff\" (UniqueName: \"kubernetes.io/projected/07fe6646-6eed-402c-a63e-2275c8cb108b-kube-api-access-zrxff\") pod \"07fe6646-6eed-402c-a63e-2275c8cb108b\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.151733 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07fe6646-6eed-402c-a63e-2275c8cb108b-secret-volume\") pod \"07fe6646-6eed-402c-a63e-2275c8cb108b\" (UID: \"07fe6646-6eed-402c-a63e-2275c8cb108b\") " Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.152432 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fe6646-6eed-402c-a63e-2275c8cb108b-config-volume" (OuterVolumeSpecName: "config-volume") pod "07fe6646-6eed-402c-a63e-2275c8cb108b" (UID: "07fe6646-6eed-402c-a63e-2275c8cb108b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.159561 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fe6646-6eed-402c-a63e-2275c8cb108b-kube-api-access-zrxff" (OuterVolumeSpecName: "kube-api-access-zrxff") pod "07fe6646-6eed-402c-a63e-2275c8cb108b" (UID: "07fe6646-6eed-402c-a63e-2275c8cb108b"). InnerVolumeSpecName "kube-api-access-zrxff". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.162589 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fe6646-6eed-402c-a63e-2275c8cb108b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "07fe6646-6eed-402c-a63e-2275c8cb108b" (UID: "07fe6646-6eed-402c-a63e-2275c8cb108b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.253663 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07fe6646-6eed-402c-a63e-2275c8cb108b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.253934 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fe6646-6eed-402c-a63e-2275c8cb108b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.254009 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrxff\" (UniqueName: \"kubernetes.io/projected/07fe6646-6eed-402c-a63e-2275c8cb108b-kube-api-access-zrxff\") on node \"crc\" DevicePath \"\"" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.576676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" 
event={"ID":"07fe6646-6eed-402c-a63e-2275c8cb108b","Type":"ContainerDied","Data":"a850367db04a18757a1770cab45c57ee63c7fc42032d1af1d7502bd3e1f451d4"} Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.576714 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a850367db04a18757a1770cab45c57ee63c7fc42032d1af1d7502bd3e1f451d4" Oct 06 13:00:03 crc kubenswrapper[4958]: I1006 13:00:03.576742 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-p9x85" Oct 06 13:00:04 crc kubenswrapper[4958]: I1006 13:00:04.080762 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2"] Oct 06 13:00:04 crc kubenswrapper[4958]: I1006 13:00:04.094472 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-wbdm2"] Oct 06 13:00:04 crc kubenswrapper[4958]: I1006 13:00:04.925827 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fb0054-0dd0-4d78-a368-5bb9b70bdfc7" path="/var/lib/kubelet/pods/15fb0054-0dd0-4d78-a368-5bb9b70bdfc7/volumes" Oct 06 13:00:32 crc kubenswrapper[4958]: I1006 13:00:32.544084 4958 scope.go:117] "RemoveContainer" containerID="d674dab3f923d3dec17353d18dd694cd7604db3a82c433e90d9d0eddd074c3d3" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.145016 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29329261-tpdvf"] Oct 06 13:01:00 crc kubenswrapper[4958]: E1006 13:01:00.145870 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fe6646-6eed-402c-a63e-2275c8cb108b" containerName="collect-profiles" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.145888 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fe6646-6eed-402c-a63e-2275c8cb108b" containerName="collect-profiles" Oct 06 13:01:00 crc 
kubenswrapper[4958]: I1006 13:01:00.146092 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fe6646-6eed-402c-a63e-2275c8cb108b" containerName="collect-profiles" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.146655 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.209902 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29329261-tpdvf"] Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.252075 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-fernet-keys\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.252141 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ss9z\" (UniqueName: \"kubernetes.io/projected/80b45d5d-1a89-4c03-a387-d74a9e2912f4-kube-api-access-7ss9z\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.252272 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-config-data\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.252295 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-combined-ca-bundle\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.353948 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-fernet-keys\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.354011 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ss9z\" (UniqueName: \"kubernetes.io/projected/80b45d5d-1a89-4c03-a387-d74a9e2912f4-kube-api-access-7ss9z\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.354081 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-config-data\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.354098 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-combined-ca-bundle\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.369254 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-fernet-keys\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.369318 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-config-data\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.372011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-combined-ca-bundle\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.376362 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ss9z\" (UniqueName: \"kubernetes.io/projected/80b45d5d-1a89-4c03-a387-d74a9e2912f4-kube-api-access-7ss9z\") pod \"keystone-cron-29329261-tpdvf\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.473033 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:00 crc kubenswrapper[4958]: I1006 13:01:00.924622 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29329261-tpdvf"] Oct 06 13:01:01 crc kubenswrapper[4958]: I1006 13:01:01.130759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-tpdvf" event={"ID":"80b45d5d-1a89-4c03-a387-d74a9e2912f4","Type":"ContainerStarted","Data":"169239b38cc73b8bc7262b837455e99e0dcbb9c0d1e6a541e3c420fc77ee0141"} Oct 06 13:01:01 crc kubenswrapper[4958]: I1006 13:01:01.130796 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-tpdvf" event={"ID":"80b45d5d-1a89-4c03-a387-d74a9e2912f4","Type":"ContainerStarted","Data":"8bc2c4b53e1e7165623ba1221b077c921ca2e68e41f63f3ddcf35350109b4d0b"} Oct 06 13:01:01 crc kubenswrapper[4958]: I1006 13:01:01.148884 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29329261-tpdvf" podStartSLOduration=1.148858095 podStartE2EDuration="1.148858095s" podCreationTimestamp="2025-10-06 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:01:01.145806235 +0000 UTC m=+4415.031831543" watchObservedRunningTime="2025-10-06 13:01:01.148858095 +0000 UTC m=+4415.034883403" Oct 06 13:01:03 crc kubenswrapper[4958]: I1006 13:01:03.149636 4958 generic.go:334] "Generic (PLEG): container finished" podID="80b45d5d-1a89-4c03-a387-d74a9e2912f4" containerID="169239b38cc73b8bc7262b837455e99e0dcbb9c0d1e6a541e3c420fc77ee0141" exitCode=0 Oct 06 13:01:03 crc kubenswrapper[4958]: I1006 13:01:03.149803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-tpdvf" 
event={"ID":"80b45d5d-1a89-4c03-a387-d74a9e2912f4","Type":"ContainerDied","Data":"169239b38cc73b8bc7262b837455e99e0dcbb9c0d1e6a541e3c420fc77ee0141"} Oct 06 13:01:04 crc kubenswrapper[4958]: I1006 13:01:04.856326 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.042104 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-config-data\") pod \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.042210 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ss9z\" (UniqueName: \"kubernetes.io/projected/80b45d5d-1a89-4c03-a387-d74a9e2912f4-kube-api-access-7ss9z\") pod \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.042287 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-fernet-keys\") pod \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.042375 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-combined-ca-bundle\") pod \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\" (UID: \"80b45d5d-1a89-4c03-a387-d74a9e2912f4\") " Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.050167 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "80b45d5d-1a89-4c03-a387-d74a9e2912f4" (UID: "80b45d5d-1a89-4c03-a387-d74a9e2912f4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.067363 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b45d5d-1a89-4c03-a387-d74a9e2912f4-kube-api-access-7ss9z" (OuterVolumeSpecName: "kube-api-access-7ss9z") pod "80b45d5d-1a89-4c03-a387-d74a9e2912f4" (UID: "80b45d5d-1a89-4c03-a387-d74a9e2912f4"). InnerVolumeSpecName "kube-api-access-7ss9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.078285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80b45d5d-1a89-4c03-a387-d74a9e2912f4" (UID: "80b45d5d-1a89-4c03-a387-d74a9e2912f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.111976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-config-data" (OuterVolumeSpecName: "config-data") pod "80b45d5d-1a89-4c03-a387-d74a9e2912f4" (UID: "80b45d5d-1a89-4c03-a387-d74a9e2912f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.146303 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.146343 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ss9z\" (UniqueName: \"kubernetes.io/projected/80b45d5d-1a89-4c03-a387-d74a9e2912f4-kube-api-access-7ss9z\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.146353 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.146362 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b45d5d-1a89-4c03-a387-d74a9e2912f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.170019 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-tpdvf" event={"ID":"80b45d5d-1a89-4c03-a387-d74a9e2912f4","Type":"ContainerDied","Data":"8bc2c4b53e1e7165623ba1221b077c921ca2e68e41f63f3ddcf35350109b4d0b"} Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.170550 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc2c4b53e1e7165623ba1221b077c921ca2e68e41f63f3ddcf35350109b4d0b" Oct 06 13:01:05 crc kubenswrapper[4958]: I1006 13:01:05.170094 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29329261-tpdvf" Oct 06 13:02:23 crc kubenswrapper[4958]: I1006 13:02:23.802178 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:02:23 crc kubenswrapper[4958]: I1006 13:02:23.803408 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:02:53 crc kubenswrapper[4958]: I1006 13:02:53.802517 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:02:53 crc kubenswrapper[4958]: I1006 13:02:53.803178 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:03:23 crc kubenswrapper[4958]: I1006 13:03:23.801416 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:03:23 crc kubenswrapper[4958]: I1006 13:03:23.801979 4958 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:03:23 crc kubenswrapper[4958]: I1006 13:03:23.802030 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 13:03:23 crc kubenswrapper[4958]: I1006 13:03:23.802671 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:03:23 crc kubenswrapper[4958]: I1006 13:03:23.802744 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" gracePeriod=600 Oct 06 13:03:23 crc kubenswrapper[4958]: E1006 13:03:23.928985 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:03:24 crc kubenswrapper[4958]: I1006 13:03:24.651848 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" exitCode=0 Oct 06 13:03:24 crc kubenswrapper[4958]: I1006 13:03:24.652043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"} Oct 06 13:03:24 crc kubenswrapper[4958]: I1006 13:03:24.652318 4958 scope.go:117] "RemoveContainer" containerID="4b480221c89ad6ea54b7e021fe811dda80d2531729ab9dcc45e8465cea93e58b" Oct 06 13:03:24 crc kubenswrapper[4958]: I1006 13:03:24.653058 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:03:24 crc kubenswrapper[4958]: E1006 13:03:24.653351 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:03:35 crc kubenswrapper[4958]: I1006 13:03:35.914577 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:03:35 crc kubenswrapper[4958]: E1006 13:03:35.915730 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 
13:03:47 crc kubenswrapper[4958]: I1006 13:03:47.914049 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:03:47 crc kubenswrapper[4958]: E1006 13:03:47.915460 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:04:02 crc kubenswrapper[4958]: I1006 13:04:02.913889 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:04:02 crc kubenswrapper[4958]: E1006 13:04:02.914681 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.467326 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8szj2"] Oct 06 13:04:06 crc kubenswrapper[4958]: E1006 13:04:06.468761 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b45d5d-1a89-4c03-a387-d74a9e2912f4" containerName="keystone-cron" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.468795 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b45d5d-1a89-4c03-a387-d74a9e2912f4" containerName="keystone-cron" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.469388 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="80b45d5d-1a89-4c03-a387-d74a9e2912f4" containerName="keystone-cron" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.472705 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.480941 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8szj2"] Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.519754 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-catalog-content\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.519939 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-utilities\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.520076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dj9\" (UniqueName: \"kubernetes.io/projected/2551c216-90d3-43f5-9733-9e8d7e88ad18-kube-api-access-t9dj9\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.621738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-catalog-content\") pod 
\"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.621830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-utilities\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.621892 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9dj9\" (UniqueName: \"kubernetes.io/projected/2551c216-90d3-43f5-9733-9e8d7e88ad18-kube-api-access-t9dj9\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.622452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-catalog-content\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.622455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-utilities\") pod \"community-operators-8szj2\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.650733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9dj9\" (UniqueName: \"kubernetes.io/projected/2551c216-90d3-43f5-9733-9e8d7e88ad18-kube-api-access-t9dj9\") pod \"community-operators-8szj2\" (UID: 
\"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:06 crc kubenswrapper[4958]: I1006 13:04:06.817079 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:07 crc kubenswrapper[4958]: I1006 13:04:07.227928 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8szj2"] Oct 06 13:04:07 crc kubenswrapper[4958]: W1006 13:04:07.238853 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2551c216_90d3_43f5_9733_9e8d7e88ad18.slice/crio-b2cc00031fc57ff69f8e93dd523de7ea8159543a1acf9bc1e88f8c25f60eaf78 WatchSource:0}: Error finding container b2cc00031fc57ff69f8e93dd523de7ea8159543a1acf9bc1e88f8c25f60eaf78: Status 404 returned error can't find the container with id b2cc00031fc57ff69f8e93dd523de7ea8159543a1acf9bc1e88f8c25f60eaf78 Oct 06 13:04:08 crc kubenswrapper[4958]: I1006 13:04:08.163881 4958 generic.go:334] "Generic (PLEG): container finished" podID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerID="760ffd449323d15055aaa89b1f8dc59bea1e14a722fdc9d9f0aea584efee7ce4" exitCode=0 Oct 06 13:04:08 crc kubenswrapper[4958]: I1006 13:04:08.163939 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8szj2" event={"ID":"2551c216-90d3-43f5-9733-9e8d7e88ad18","Type":"ContainerDied","Data":"760ffd449323d15055aaa89b1f8dc59bea1e14a722fdc9d9f0aea584efee7ce4"} Oct 06 13:04:08 crc kubenswrapper[4958]: I1006 13:04:08.164241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8szj2" event={"ID":"2551c216-90d3-43f5-9733-9e8d7e88ad18","Type":"ContainerStarted","Data":"b2cc00031fc57ff69f8e93dd523de7ea8159543a1acf9bc1e88f8c25f60eaf78"} Oct 06 13:04:08 crc kubenswrapper[4958]: I1006 13:04:08.166330 4958 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 13:04:10 crc kubenswrapper[4958]: I1006 13:04:10.213732 4958 generic.go:334] "Generic (PLEG): container finished" podID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerID="8aebf120c07ae385751eb0a6c4fb713693338565d0db3dc6a2784334c2dea60d" exitCode=0 Oct 06 13:04:10 crc kubenswrapper[4958]: I1006 13:04:10.214128 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8szj2" event={"ID":"2551c216-90d3-43f5-9733-9e8d7e88ad18","Type":"ContainerDied","Data":"8aebf120c07ae385751eb0a6c4fb713693338565d0db3dc6a2784334c2dea60d"} Oct 06 13:04:11 crc kubenswrapper[4958]: I1006 13:04:11.229278 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8szj2" event={"ID":"2551c216-90d3-43f5-9733-9e8d7e88ad18","Type":"ContainerStarted","Data":"a54f3aacd946d8467dbfaf8b7d1d53bb66defa4de114d068b6a74a0184ca5779"} Oct 06 13:04:11 crc kubenswrapper[4958]: I1006 13:04:11.254840 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8szj2" podStartSLOduration=2.774986036 podStartE2EDuration="5.254825905s" podCreationTimestamp="2025-10-06 13:04:06 +0000 UTC" firstStartedPulling="2025-10-06 13:04:08.166042865 +0000 UTC m=+4602.052068163" lastFinishedPulling="2025-10-06 13:04:10.645882724 +0000 UTC m=+4604.531908032" observedRunningTime="2025-10-06 13:04:11.251650401 +0000 UTC m=+4605.137675719" watchObservedRunningTime="2025-10-06 13:04:11.254825905 +0000 UTC m=+4605.140851213" Oct 06 13:04:14 crc kubenswrapper[4958]: I1006 13:04:14.913498 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:04:14 crc kubenswrapper[4958]: E1006 13:04:14.914400 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:04:16 crc kubenswrapper[4958]: I1006 13:04:16.817824 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:16 crc kubenswrapper[4958]: I1006 13:04:16.818204 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:16 crc kubenswrapper[4958]: I1006 13:04:16.890998 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:17 crc kubenswrapper[4958]: I1006 13:04:17.334390 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:17 crc kubenswrapper[4958]: I1006 13:04:17.393877 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8szj2"] Oct 06 13:04:19 crc kubenswrapper[4958]: I1006 13:04:19.296929 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8szj2" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="registry-server" containerID="cri-o://a54f3aacd946d8467dbfaf8b7d1d53bb66defa4de114d068b6a74a0184ca5779" gracePeriod=2 Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.308392 4958 generic.go:334] "Generic (PLEG): container finished" podID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerID="a54f3aacd946d8467dbfaf8b7d1d53bb66defa4de114d068b6a74a0184ca5779" exitCode=0 Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.308457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8szj2" event={"ID":"2551c216-90d3-43f5-9733-9e8d7e88ad18","Type":"ContainerDied","Data":"a54f3aacd946d8467dbfaf8b7d1d53bb66defa4de114d068b6a74a0184ca5779"} Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.465876 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.630542 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-utilities\") pod \"2551c216-90d3-43f5-9733-9e8d7e88ad18\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.630700 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-catalog-content\") pod \"2551c216-90d3-43f5-9733-9e8d7e88ad18\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.630746 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9dj9\" (UniqueName: \"kubernetes.io/projected/2551c216-90d3-43f5-9733-9e8d7e88ad18-kube-api-access-t9dj9\") pod \"2551c216-90d3-43f5-9733-9e8d7e88ad18\" (UID: \"2551c216-90d3-43f5-9733-9e8d7e88ad18\") " Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.631648 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-utilities" (OuterVolumeSpecName: "utilities") pod "2551c216-90d3-43f5-9733-9e8d7e88ad18" (UID: "2551c216-90d3-43f5-9733-9e8d7e88ad18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.639518 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2551c216-90d3-43f5-9733-9e8d7e88ad18-kube-api-access-t9dj9" (OuterVolumeSpecName: "kube-api-access-t9dj9") pod "2551c216-90d3-43f5-9733-9e8d7e88ad18" (UID: "2551c216-90d3-43f5-9733-9e8d7e88ad18"). InnerVolumeSpecName "kube-api-access-t9dj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.732649 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9dj9\" (UniqueName: \"kubernetes.io/projected/2551c216-90d3-43f5-9733-9e8d7e88ad18-kube-api-access-t9dj9\") on node \"crc\" DevicePath \"\"" Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.732916 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.775016 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2551c216-90d3-43f5-9733-9e8d7e88ad18" (UID: "2551c216-90d3-43f5-9733-9e8d7e88ad18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:04:20 crc kubenswrapper[4958]: I1006 13:04:20.834616 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2551c216-90d3-43f5-9733-9e8d7e88ad18-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.327599 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8szj2" event={"ID":"2551c216-90d3-43f5-9733-9e8d7e88ad18","Type":"ContainerDied","Data":"b2cc00031fc57ff69f8e93dd523de7ea8159543a1acf9bc1e88f8c25f60eaf78"} Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.327678 4958 scope.go:117] "RemoveContainer" containerID="a54f3aacd946d8467dbfaf8b7d1d53bb66defa4de114d068b6a74a0184ca5779" Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.327880 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8szj2" Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.363215 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8szj2"] Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.367781 4958 scope.go:117] "RemoveContainer" containerID="8aebf120c07ae385751eb0a6c4fb713693338565d0db3dc6a2784334c2dea60d" Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.376775 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8szj2"] Oct 06 13:04:21 crc kubenswrapper[4958]: I1006 13:04:21.399204 4958 scope.go:117] "RemoveContainer" containerID="760ffd449323d15055aaa89b1f8dc59bea1e14a722fdc9d9f0aea584efee7ce4" Oct 06 13:04:22 crc kubenswrapper[4958]: I1006 13:04:22.936283 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" path="/var/lib/kubelet/pods/2551c216-90d3-43f5-9733-9e8d7e88ad18/volumes" Oct 06 13:04:25 crc 
kubenswrapper[4958]: I1006 13:04:25.913578 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:04:25 crc kubenswrapper[4958]: E1006 13:04:25.915499 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:04:40 crc kubenswrapper[4958]: I1006 13:04:40.913862 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:04:40 crc kubenswrapper[4958]: E1006 13:04:40.914855 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:04:55 crc kubenswrapper[4958]: I1006 13:04:55.913838 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:04:55 crc kubenswrapper[4958]: E1006 13:04:55.915260 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 
06 13:05:09 crc kubenswrapper[4958]: I1006 13:05:09.913106 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:05:09 crc kubenswrapper[4958]: E1006 13:05:09.913942 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:05:20 crc kubenswrapper[4958]: I1006 13:05:20.914444 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:05:20 crc kubenswrapper[4958]: E1006 13:05:20.915430 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:05:35 crc kubenswrapper[4958]: I1006 13:05:35.913557 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:05:35 crc kubenswrapper[4958]: E1006 13:05:35.914918 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.117076 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vhkf"] Oct 06 13:05:45 crc kubenswrapper[4958]: E1006 13:05:45.118557 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="extract-utilities" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.118582 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="extract-utilities" Oct 06 13:05:45 crc kubenswrapper[4958]: E1006 13:05:45.118663 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="extract-content" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.118677 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="extract-content" Oct 06 13:05:45 crc kubenswrapper[4958]: E1006 13:05:45.118751 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="registry-server" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.118768 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="registry-server" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.119411 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2551c216-90d3-43f5-9733-9e8d7e88ad18" containerName="registry-server" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.125290 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.138302 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vhkf"] Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.280716 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-catalog-content\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.280833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jlsf\" (UniqueName: \"kubernetes.io/projected/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-kube-api-access-2jlsf\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.280982 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-utilities\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.382864 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-catalog-content\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.382910 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2jlsf\" (UniqueName: \"kubernetes.io/projected/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-kube-api-access-2jlsf\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.382982 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-utilities\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.383421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-catalog-content\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.383586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-utilities\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.533070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jlsf\" (UniqueName: \"kubernetes.io/projected/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-kube-api-access-2jlsf\") pod \"redhat-marketplace-6vhkf\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:45 crc kubenswrapper[4958]: I1006 13:05:45.764024 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:46 crc kubenswrapper[4958]: I1006 13:05:46.297121 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vhkf"] Oct 06 13:05:47 crc kubenswrapper[4958]: I1006 13:05:47.275043 4958 generic.go:334] "Generic (PLEG): container finished" podID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerID="554392bcf348d8c55924b9b30189ca49e5a3e312aa4c2006a4b4ff1f0df4d6de" exitCode=0 Oct 06 13:05:47 crc kubenswrapper[4958]: I1006 13:05:47.275116 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vhkf" event={"ID":"a298484f-f091-4c8c-96c4-f6cb4b67d8e0","Type":"ContainerDied","Data":"554392bcf348d8c55924b9b30189ca49e5a3e312aa4c2006a4b4ff1f0df4d6de"} Oct 06 13:05:47 crc kubenswrapper[4958]: I1006 13:05:47.275538 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vhkf" event={"ID":"a298484f-f091-4c8c-96c4-f6cb4b67d8e0","Type":"ContainerStarted","Data":"220db174dcf206ff7b14c50a6d87c88cc9f3a5b039ca8637328968687b484842"} Oct 06 13:05:48 crc kubenswrapper[4958]: I1006 13:05:48.913536 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:05:48 crc kubenswrapper[4958]: E1006 13:05:48.914994 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:05:49 crc kubenswrapper[4958]: I1006 13:05:49.311539 4958 generic.go:334] "Generic (PLEG): container finished" podID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" 
containerID="48170cacd8b47b4f42343fd69ea4eacfde4f7b444f15b2d0c890fb94767e853d" exitCode=0 Oct 06 13:05:49 crc kubenswrapper[4958]: I1006 13:05:49.311600 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vhkf" event={"ID":"a298484f-f091-4c8c-96c4-f6cb4b67d8e0","Type":"ContainerDied","Data":"48170cacd8b47b4f42343fd69ea4eacfde4f7b444f15b2d0c890fb94767e853d"} Oct 06 13:05:50 crc kubenswrapper[4958]: I1006 13:05:50.327038 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vhkf" event={"ID":"a298484f-f091-4c8c-96c4-f6cb4b67d8e0","Type":"ContainerStarted","Data":"3a3ba2904ce0b8fc1d0891c6bbbad9417632cbb454f87ef2ec447d5c624f6d02"} Oct 06 13:05:50 crc kubenswrapper[4958]: I1006 13:05:50.352108 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vhkf" podStartSLOduration=2.882910345 podStartE2EDuration="5.352089241s" podCreationTimestamp="2025-10-06 13:05:45 +0000 UTC" firstStartedPulling="2025-10-06 13:05:47.278755507 +0000 UTC m=+4701.164780835" lastFinishedPulling="2025-10-06 13:05:49.747934423 +0000 UTC m=+4703.633959731" observedRunningTime="2025-10-06 13:05:50.347072043 +0000 UTC m=+4704.233097391" watchObservedRunningTime="2025-10-06 13:05:50.352089241 +0000 UTC m=+4704.238114569" Oct 06 13:05:55 crc kubenswrapper[4958]: I1006 13:05:55.764450 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:55 crc kubenswrapper[4958]: I1006 13:05:55.765125 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:55 crc kubenswrapper[4958]: I1006 13:05:55.860790 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:56 crc kubenswrapper[4958]: I1006 13:05:56.710036 
4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:56 crc kubenswrapper[4958]: I1006 13:05:56.771916 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vhkf"] Oct 06 13:05:58 crc kubenswrapper[4958]: I1006 13:05:58.413247 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vhkf" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="registry-server" containerID="cri-o://3a3ba2904ce0b8fc1d0891c6bbbad9417632cbb454f87ef2ec447d5c624f6d02" gracePeriod=2 Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.427225 4958 generic.go:334] "Generic (PLEG): container finished" podID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerID="3a3ba2904ce0b8fc1d0891c6bbbad9417632cbb454f87ef2ec447d5c624f6d02" exitCode=0 Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.427465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vhkf" event={"ID":"a298484f-f091-4c8c-96c4-f6cb4b67d8e0","Type":"ContainerDied","Data":"3a3ba2904ce0b8fc1d0891c6bbbad9417632cbb454f87ef2ec447d5c624f6d02"} Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.427721 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vhkf" event={"ID":"a298484f-f091-4c8c-96c4-f6cb4b67d8e0","Type":"ContainerDied","Data":"220db174dcf206ff7b14c50a6d87c88cc9f3a5b039ca8637328968687b484842"} Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.427738 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220db174dcf206ff7b14c50a6d87c88cc9f3a5b039ca8637328968687b484842" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.473465 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.589390 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-catalog-content\") pod \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.589456 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-utilities\") pod \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.589474 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jlsf\" (UniqueName: \"kubernetes.io/projected/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-kube-api-access-2jlsf\") pod \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\" (UID: \"a298484f-f091-4c8c-96c4-f6cb4b67d8e0\") " Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.591826 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-utilities" (OuterVolumeSpecName: "utilities") pod "a298484f-f091-4c8c-96c4-f6cb4b67d8e0" (UID: "a298484f-f091-4c8c-96c4-f6cb4b67d8e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.598056 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-kube-api-access-2jlsf" (OuterVolumeSpecName: "kube-api-access-2jlsf") pod "a298484f-f091-4c8c-96c4-f6cb4b67d8e0" (UID: "a298484f-f091-4c8c-96c4-f6cb4b67d8e0"). InnerVolumeSpecName "kube-api-access-2jlsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.601799 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a298484f-f091-4c8c-96c4-f6cb4b67d8e0" (UID: "a298484f-f091-4c8c-96c4-f6cb4b67d8e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.692551 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.692593 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:05:59 crc kubenswrapper[4958]: I1006 13:05:59.692609 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jlsf\" (UniqueName: \"kubernetes.io/projected/a298484f-f091-4c8c-96c4-f6cb4b67d8e0-kube-api-access-2jlsf\") on node \"crc\" DevicePath \"\"" Oct 06 13:06:00 crc kubenswrapper[4958]: I1006 13:06:00.439693 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vhkf" Oct 06 13:06:00 crc kubenswrapper[4958]: I1006 13:06:00.514910 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vhkf"] Oct 06 13:06:00 crc kubenswrapper[4958]: I1006 13:06:00.524771 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vhkf"] Oct 06 13:06:00 crc kubenswrapper[4958]: I1006 13:06:00.930330 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" path="/var/lib/kubelet/pods/a298484f-f091-4c8c-96c4-f6cb4b67d8e0/volumes" Oct 06 13:06:03 crc kubenswrapper[4958]: I1006 13:06:03.916495 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:06:03 crc kubenswrapper[4958]: E1006 13:06:03.922420 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:06:16 crc kubenswrapper[4958]: I1006 13:06:16.928080 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:06:16 crc kubenswrapper[4958]: E1006 13:06:16.929565 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:06:29 crc kubenswrapper[4958]: I1006 13:06:29.913839 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:06:29 crc kubenswrapper[4958]: E1006 13:06:29.914588 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.486792 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcd54"] Oct 06 13:06:33 crc kubenswrapper[4958]: E1006 13:06:33.487972 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="extract-content" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.487996 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="extract-content" Oct 06 13:06:33 crc kubenswrapper[4958]: E1006 13:06:33.488013 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="registry-server" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.488024 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="registry-server" Oct 06 13:06:33 crc kubenswrapper[4958]: E1006 13:06:33.488051 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="extract-utilities" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.488063 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="extract-utilities" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.489052 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a298484f-f091-4c8c-96c4-f6cb4b67d8e0" containerName="registry-server" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.493406 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.505655 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcd54"] Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.631024 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-utilities\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.631095 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-catalog-content\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.631232 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cndk\" (UniqueName: \"kubernetes.io/projected/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-kube-api-access-5cndk\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.732997 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-utilities\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.733660 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-catalog-content\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.733594 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-utilities\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.733762 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cndk\" (UniqueName: \"kubernetes.io/projected/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-kube-api-access-5cndk\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.734285 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-catalog-content\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.769007 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cndk\" (UniqueName: 
\"kubernetes.io/projected/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-kube-api-access-5cndk\") pod \"redhat-operators-hcd54\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") " pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:33 crc kubenswrapper[4958]: I1006 13:06:33.848804 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:34 crc kubenswrapper[4958]: I1006 13:06:34.335680 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcd54"] Oct 06 13:06:34 crc kubenswrapper[4958]: I1006 13:06:34.781755 4958 generic.go:334] "Generic (PLEG): container finished" podID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerID="588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb" exitCode=0 Oct 06 13:06:34 crc kubenswrapper[4958]: I1006 13:06:34.781801 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcd54" event={"ID":"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e","Type":"ContainerDied","Data":"588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb"} Oct 06 13:06:34 crc kubenswrapper[4958]: I1006 13:06:34.781830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcd54" event={"ID":"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e","Type":"ContainerStarted","Data":"be424c09fa599f9bf3cc50f9b4ed951aa6651c8da65925584e5b5ff6f016f21f"} Oct 06 13:06:36 crc kubenswrapper[4958]: I1006 13:06:36.804252 4958 generic.go:334] "Generic (PLEG): container finished" podID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerID="f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945" exitCode=0 Oct 06 13:06:36 crc kubenswrapper[4958]: I1006 13:06:36.804412 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcd54" 
event={"ID":"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e","Type":"ContainerDied","Data":"f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945"} Oct 06 13:06:37 crc kubenswrapper[4958]: I1006 13:06:37.814035 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcd54" event={"ID":"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e","Type":"ContainerStarted","Data":"5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598"} Oct 06 13:06:37 crc kubenswrapper[4958]: I1006 13:06:37.842075 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcd54" podStartSLOduration=2.290245424 podStartE2EDuration="4.842050456s" podCreationTimestamp="2025-10-06 13:06:33 +0000 UTC" firstStartedPulling="2025-10-06 13:06:34.783778927 +0000 UTC m=+4748.669804235" lastFinishedPulling="2025-10-06 13:06:37.335583959 +0000 UTC m=+4751.221609267" observedRunningTime="2025-10-06 13:06:37.834329079 +0000 UTC m=+4751.720354397" watchObservedRunningTime="2025-10-06 13:06:37.842050456 +0000 UTC m=+4751.728075764" Oct 06 13:06:43 crc kubenswrapper[4958]: I1006 13:06:43.849530 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:43 crc kubenswrapper[4958]: I1006 13:06:43.850375 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:43 crc kubenswrapper[4958]: I1006 13:06:43.921286 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcd54" Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.732445 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ltf4j"] Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.735193 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ltf4j" Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.743986 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltf4j"] Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.885206 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-catalog-content\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j" Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.885266 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps7rv\" (UniqueName: \"kubernetes.io/projected/2feae1f3-e85b-411b-9522-598c5d46786d-kube-api-access-ps7rv\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j" Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.885376 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-utilities\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j" Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.913108 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a" Oct 06 13:06:44 crc kubenswrapper[4958]: E1006 13:06:44.913404 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.960982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcd54"
Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.987301 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-catalog-content\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.987356 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps7rv\" (UniqueName: \"kubernetes.io/projected/2feae1f3-e85b-411b-9522-598c5d46786d-kube-api-access-ps7rv\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.987441 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-utilities\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.988040 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-utilities\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:44 crc kubenswrapper[4958]: I1006 13:06:44.988347 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-catalog-content\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:45 crc kubenswrapper[4958]: I1006 13:06:45.014571 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps7rv\" (UniqueName: \"kubernetes.io/projected/2feae1f3-e85b-411b-9522-598c5d46786d-kube-api-access-ps7rv\") pod \"certified-operators-ltf4j\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") " pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:45 crc kubenswrapper[4958]: I1006 13:06:45.071789 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:45 crc kubenswrapper[4958]: I1006 13:06:45.581156 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltf4j"]
Oct 06 13:06:45 crc kubenswrapper[4958]: I1006 13:06:45.902068 4958 generic.go:334] "Generic (PLEG): container finished" podID="2feae1f3-e85b-411b-9522-598c5d46786d" containerID="fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0" exitCode=0
Oct 06 13:06:45 crc kubenswrapper[4958]: I1006 13:06:45.902436 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltf4j" event={"ID":"2feae1f3-e85b-411b-9522-598c5d46786d","Type":"ContainerDied","Data":"fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0"}
Oct 06 13:06:45 crc kubenswrapper[4958]: I1006 13:06:45.904265 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltf4j" event={"ID":"2feae1f3-e85b-411b-9522-598c5d46786d","Type":"ContainerStarted","Data":"d7966b0a8ddfdb28c4fbabcd4c88cc6dd53b0d39c68cb93b473c48f46b456e68"}
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.375510 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcd54"]
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.376898 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcd54" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="registry-server" containerID="cri-o://5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598" gracePeriod=2
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.886243 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcd54"
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.933981 4958 generic.go:334] "Generic (PLEG): container finished" podID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerID="5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598" exitCode=0
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.934047 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcd54"
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.934098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcd54" event={"ID":"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e","Type":"ContainerDied","Data":"5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598"}
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.934172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcd54" event={"ID":"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e","Type":"ContainerDied","Data":"be424c09fa599f9bf3cc50f9b4ed951aa6651c8da65925584e5b5ff6f016f21f"}
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.934345 4958 scope.go:117] "RemoveContainer" containerID="5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598"
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.937396 4958 generic.go:334] "Generic (PLEG): container finished" podID="2feae1f3-e85b-411b-9522-598c5d46786d" containerID="68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd" exitCode=0
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.937438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltf4j" event={"ID":"2feae1f3-e85b-411b-9522-598c5d46786d","Type":"ContainerDied","Data":"68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd"}
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.961017 4958 scope.go:117] "RemoveContainer" containerID="f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945"
Oct 06 13:06:47 crc kubenswrapper[4958]: I1006 13:06:47.986910 4958 scope.go:117] "RemoveContainer" containerID="588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.023578 4958 scope.go:117] "RemoveContainer" containerID="5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598"
Oct 06 13:06:48 crc kubenswrapper[4958]: E1006 13:06:48.023913 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598\": container with ID starting with 5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598 not found: ID does not exist" containerID="5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.023962 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598"} err="failed to get container status \"5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598\": rpc error: code = NotFound desc = could not find container \"5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598\": container with ID starting with 5ec6fb44c0939a42b9cbfb41539c7affe874e6dbe908430f06ab579e6c35c598 not found: ID does not exist"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.023993 4958 scope.go:117] "RemoveContainer" containerID="f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945"
Oct 06 13:06:48 crc kubenswrapper[4958]: E1006 13:06:48.024291 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945\": container with ID starting with f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945 not found: ID does not exist" containerID="f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.024323 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945"} err="failed to get container status \"f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945\": rpc error: code = NotFound desc = could not find container \"f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945\": container with ID starting with f72e51e012a3cc97d6a264963fc8e60e956dd8e8f16bfd1d38182a22b25d1945 not found: ID does not exist"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.024349 4958 scope.go:117] "RemoveContainer" containerID="588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb"
Oct 06 13:06:48 crc kubenswrapper[4958]: E1006 13:06:48.024526 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb\": container with ID starting with 588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb not found: ID does not exist" containerID="588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.024552 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb"} err="failed to get container status \"588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb\": rpc error: code = NotFound desc = could not find container \"588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb\": container with ID starting with 588911cc8cbe4b4a1013b28d8ed169b8ce81452d512ec79af31d7a624953d0fb not found: ID does not exist"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.050292 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cndk\" (UniqueName: \"kubernetes.io/projected/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-kube-api-access-5cndk\") pod \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") "
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.050467 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-catalog-content\") pod \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") "
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.054046 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-utilities\") pod \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\" (UID: \"cbefd68f-afd2-4dbc-a811-b4c9d9d1249e\") "
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.055621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-kube-api-access-5cndk" (OuterVolumeSpecName: "kube-api-access-5cndk") pod "cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" (UID: "cbefd68f-afd2-4dbc-a811-b4c9d9d1249e"). InnerVolumeSpecName "kube-api-access-5cndk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.056470 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-utilities" (OuterVolumeSpecName: "utilities") pod "cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" (UID: "cbefd68f-afd2-4dbc-a811-b4c9d9d1249e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.134118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" (UID: "cbefd68f-afd2-4dbc-a811-b4c9d9d1249e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.157657 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cndk\" (UniqueName: \"kubernetes.io/projected/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-kube-api-access-5cndk\") on node \"crc\" DevicePath \"\""
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.157834 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.158078 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.270685 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcd54"]
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.278252 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcd54"]
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.934952 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" path="/var/lib/kubelet/pods/cbefd68f-afd2-4dbc-a811-b4c9d9d1249e/volumes"
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.957739 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltf4j" event={"ID":"2feae1f3-e85b-411b-9522-598c5d46786d","Type":"ContainerStarted","Data":"06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b"}
Oct 06 13:06:48 crc kubenswrapper[4958]: I1006 13:06:48.995666 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ltf4j" podStartSLOduration=2.434386617 podStartE2EDuration="4.995633027s" podCreationTimestamp="2025-10-06 13:06:44 +0000 UTC" firstStartedPulling="2025-10-06 13:06:45.904326493 +0000 UTC m=+4759.790351801" lastFinishedPulling="2025-10-06 13:06:48.465572863 +0000 UTC m=+4762.351598211" observedRunningTime="2025-10-06 13:06:48.983333654 +0000 UTC m=+4762.869358962" watchObservedRunningTime="2025-10-06 13:06:48.995633027 +0000 UTC m=+4762.881658385"
Oct 06 13:06:55 crc kubenswrapper[4958]: I1006 13:06:55.072320 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:55 crc kubenswrapper[4958]: I1006 13:06:55.072970 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:55 crc kubenswrapper[4958]: I1006 13:06:55.128602 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:55 crc kubenswrapper[4958]: I1006 13:06:55.914252 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:06:55 crc kubenswrapper[4958]: E1006 13:06:55.914728 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:06:56 crc kubenswrapper[4958]: I1006 13:06:56.095632 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:56 crc kubenswrapper[4958]: I1006 13:06:56.188120 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltf4j"]
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.050904 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ltf4j" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="registry-server" containerID="cri-o://06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b" gracePeriod=2
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.612390 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.678221 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-utilities\") pod \"2feae1f3-e85b-411b-9522-598c5d46786d\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") "
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.678370 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps7rv\" (UniqueName: \"kubernetes.io/projected/2feae1f3-e85b-411b-9522-598c5d46786d-kube-api-access-ps7rv\") pod \"2feae1f3-e85b-411b-9522-598c5d46786d\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") "
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.678569 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-catalog-content\") pod \"2feae1f3-e85b-411b-9522-598c5d46786d\" (UID: \"2feae1f3-e85b-411b-9522-598c5d46786d\") "
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.680249 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-utilities" (OuterVolumeSpecName: "utilities") pod "2feae1f3-e85b-411b-9522-598c5d46786d" (UID: "2feae1f3-e85b-411b-9522-598c5d46786d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.730064 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2feae1f3-e85b-411b-9522-598c5d46786d-kube-api-access-ps7rv" (OuterVolumeSpecName: "kube-api-access-ps7rv") pod "2feae1f3-e85b-411b-9522-598c5d46786d" (UID: "2feae1f3-e85b-411b-9522-598c5d46786d"). InnerVolumeSpecName "kube-api-access-ps7rv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.734802 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2feae1f3-e85b-411b-9522-598c5d46786d" (UID: "2feae1f3-e85b-411b-9522-598c5d46786d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.780852 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps7rv\" (UniqueName: \"kubernetes.io/projected/2feae1f3-e85b-411b-9522-598c5d46786d-kube-api-access-ps7rv\") on node \"crc\" DevicePath \"\""
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.780885 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 13:06:58 crc kubenswrapper[4958]: I1006 13:06:58.780895 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2feae1f3-e85b-411b-9522-598c5d46786d-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.068345 4958 generic.go:334] "Generic (PLEG): container finished" podID="2feae1f3-e85b-411b-9522-598c5d46786d" containerID="06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b" exitCode=0
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.068414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltf4j" event={"ID":"2feae1f3-e85b-411b-9522-598c5d46786d","Type":"ContainerDied","Data":"06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b"}
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.068465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltf4j" event={"ID":"2feae1f3-e85b-411b-9522-598c5d46786d","Type":"ContainerDied","Data":"d7966b0a8ddfdb28c4fbabcd4c88cc6dd53b0d39c68cb93b473c48f46b456e68"}
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.068503 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltf4j"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.068510 4958 scope.go:117] "RemoveContainer" containerID="06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.109777 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltf4j"]
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.110015 4958 scope.go:117] "RemoveContainer" containerID="68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.124541 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ltf4j"]
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.151159 4958 scope.go:117] "RemoveContainer" containerID="fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.214087 4958 scope.go:117] "RemoveContainer" containerID="06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b"
Oct 06 13:06:59 crc kubenswrapper[4958]: E1006 13:06:59.214628 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b\": container with ID starting with 06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b not found: ID does not exist" containerID="06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.214675 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b"} err="failed to get container status \"06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b\": rpc error: code = NotFound desc = could not find container \"06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b\": container with ID starting with 06a2aab2c6395fe07d98869906294f0a76587c297e8fbcd1750920a086bdbe5b not found: ID does not exist"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.214701 4958 scope.go:117] "RemoveContainer" containerID="68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd"
Oct 06 13:06:59 crc kubenswrapper[4958]: E1006 13:06:59.215130 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd\": container with ID starting with 68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd not found: ID does not exist" containerID="68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.215180 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd"} err="failed to get container status \"68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd\": rpc error: code = NotFound desc = could not find container \"68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd\": container with ID starting with 68eaa1e23ab8cf8f0fcae695127e6825255a4977195fdd7514f99bd506f940dd not found: ID does not exist"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.215202 4958 scope.go:117] "RemoveContainer" containerID="fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0"
Oct 06 13:06:59 crc kubenswrapper[4958]: E1006 13:06:59.215576 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0\": container with ID starting with fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0 not found: ID does not exist" containerID="fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0"
Oct 06 13:06:59 crc kubenswrapper[4958]: I1006 13:06:59.215605 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0"} err="failed to get container status \"fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0\": rpc error: code = NotFound desc = could not find container \"fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0\": container with ID starting with fd9fe669e2b484c72049931df063e860a6bf0b3bb0d62b49c008799332aa0cd0 not found: ID does not exist"
Oct 06 13:07:00 crc kubenswrapper[4958]: I1006 13:07:00.937698 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" path="/var/lib/kubelet/pods/2feae1f3-e85b-411b-9522-598c5d46786d/volumes"
Oct 06 13:07:10 crc kubenswrapper[4958]: I1006 13:07:10.914508 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:07:10 crc kubenswrapper[4958]: E1006 13:07:10.915616 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:07:25 crc kubenswrapper[4958]: I1006 13:07:25.913402 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:07:25 crc kubenswrapper[4958]: E1006 13:07:25.914489 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:07:37 crc kubenswrapper[4958]: I1006 13:07:37.913562 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:07:37 crc kubenswrapper[4958]: E1006 13:07:37.914810 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:07:52 crc kubenswrapper[4958]: I1006 13:07:52.913572 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:07:52 crc kubenswrapper[4958]: E1006 13:07:52.914703 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:08:06 crc kubenswrapper[4958]: I1006 13:08:06.921979 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:08:06 crc kubenswrapper[4958]: E1006 13:08:06.922870 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:08:17 crc kubenswrapper[4958]: I1006 13:08:17.913801 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:08:17 crc kubenswrapper[4958]: E1006 13:08:17.915024 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:08:31 crc kubenswrapper[4958]: I1006 13:08:31.915229 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:08:33 crc kubenswrapper[4958]: I1006 13:08:33.128547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"b6fcb4d8b3d156808c5f09a3bfa6066cc9d9732cd809369a2217718cfb946c7b"}
Oct 06 13:10:53 crc kubenswrapper[4958]: I1006 13:10:53.801332 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:10:53 crc kubenswrapper[4958]: I1006 13:10:53.802069 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:11:23 crc kubenswrapper[4958]: I1006 13:11:23.802321 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:11:23 crc kubenswrapper[4958]: I1006 13:11:23.804318 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:11:53 crc kubenswrapper[4958]: I1006 13:11:53.802376 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:11:53 crc kubenswrapper[4958]: I1006 13:11:53.803274 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:11:53 crc kubenswrapper[4958]: I1006 13:11:53.803363 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z"
Oct 06 13:11:53 crc kubenswrapper[4958]: I1006 13:11:53.804692 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6fcb4d8b3d156808c5f09a3bfa6066cc9d9732cd809369a2217718cfb946c7b"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 13:11:53 crc kubenswrapper[4958]: I1006 13:11:53.805048 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://b6fcb4d8b3d156808c5f09a3bfa6066cc9d9732cd809369a2217718cfb946c7b" gracePeriod=600
Oct 06 13:11:54 crc kubenswrapper[4958]: I1006 13:11:54.251352 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="b6fcb4d8b3d156808c5f09a3bfa6066cc9d9732cd809369a2217718cfb946c7b" exitCode=0
Oct 06 13:11:54 crc kubenswrapper[4958]: I1006 13:11:54.251432 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"b6fcb4d8b3d156808c5f09a3bfa6066cc9d9732cd809369a2217718cfb946c7b"}
Oct 06 13:11:54 crc kubenswrapper[4958]: I1006 13:11:54.251878 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777"}
Oct 06 13:11:54 crc kubenswrapper[4958]: I1006 13:11:54.251915 4958 scope.go:117] "RemoveContainer" containerID="9bfe594bd1bd50636dd556d4ca46badc87684952b4d8eba0a11662e99a16bf6a"
Oct 06 13:12:32 crc kubenswrapper[4958]: I1006 13:12:32.920731 4958 scope.go:117] "RemoveContainer" containerID="48170cacd8b47b4f42343fd69ea4eacfde4f7b444f15b2d0c890fb94767e853d"
Oct 06 13:12:32 crc kubenswrapper[4958]: I1006 13:12:32.958577 4958 scope.go:117] "RemoveContainer" containerID="3a3ba2904ce0b8fc1d0891c6bbbad9417632cbb454f87ef2ec447d5c624f6d02"
Oct 06 13:12:33 crc kubenswrapper[4958]: I1006 13:12:33.019958 4958 scope.go:117] "RemoveContainer" containerID="554392bcf348d8c55924b9b30189ca49e5a3e312aa4c2006a4b4ff1f0df4d6de"
Oct 06 13:14:23 crc kubenswrapper[4958]: I1006 13:14:23.801741 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:14:23 crc kubenswrapper[4958]: I1006 13:14:23.802349 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:14:53 crc kubenswrapper[4958]: I1006 13:14:53.802443 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:14:53 crc kubenswrapper[4958]: I1006 13:14:53.802919 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.166245 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p"]
Oct 06 13:15:00 crc kubenswrapper[4958]: E1006 13:15:00.168554 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="extract-utilities"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.168599 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="extract-utilities"
Oct 06 13:15:00 crc kubenswrapper[4958]: E1006 13:15:00.168630 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="extract-content"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.168642 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="extract-content"
Oct 06 13:15:00 crc kubenswrapper[4958]: E1006 13:15:00.168670 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="registry-server"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.168681 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="registry-server"
Oct 06 13:15:00 crc kubenswrapper[4958]: E1006 13:15:00.168711 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="extract-utilities"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.168723 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="extract-utilities"
Oct 06 13:15:00 crc kubenswrapper[4958]: E1006 13:15:00.168756 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="registry-server"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.168766 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="registry-server"
Oct 06 13:15:00 crc kubenswrapper[4958]: E1006 13:15:00.168782 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="extract-content"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.168793 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="extract-content"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.169087 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2feae1f3-e85b-411b-9522-598c5d46786d" containerName="registry-server"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.169125 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbefd68f-afd2-4dbc-a811-b4c9d9d1249e" containerName="registry-server"
Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.170166 4958 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.175298 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p"] Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.175904 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.176274 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.200751 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-config-volume\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.200807 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krj4b\" (UniqueName: \"kubernetes.io/projected/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-kube-api-access-krj4b\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.200890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-secret-volume\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.302551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-secret-volume\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.302779 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-config-volume\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.302826 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krj4b\" (UniqueName: \"kubernetes.io/projected/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-kube-api-access-krj4b\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.304217 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-config-volume\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.442807 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krj4b\" (UniqueName: 
\"kubernetes.io/projected/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-kube-api-access-krj4b\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.446853 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-secret-volume\") pod \"collect-profiles-29329275-k5m9p\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:00 crc kubenswrapper[4958]: I1006 13:15:00.504891 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:01 crc kubenswrapper[4958]: I1006 13:15:01.015850 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p"] Oct 06 13:15:01 crc kubenswrapper[4958]: I1006 13:15:01.286795 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" event={"ID":"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e","Type":"ContainerStarted","Data":"b0250838a2c75f33151f70f3f81636d5d9e3c90d4ae8cd29f04185c2611f3fc2"} Oct 06 13:15:01 crc kubenswrapper[4958]: I1006 13:15:01.287273 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" event={"ID":"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e","Type":"ContainerStarted","Data":"b41a8ad89f7e01f4e98a028c5382b4d7d41f3f8c70b8334a17eca6c8277b2085"} Oct 06 13:15:01 crc kubenswrapper[4958]: I1006 13:15:01.309617 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" 
podStartSLOduration=1.309591374 podStartE2EDuration="1.309591374s" podCreationTimestamp="2025-10-06 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:15:01.302545516 +0000 UTC m=+5255.188570884" watchObservedRunningTime="2025-10-06 13:15:01.309591374 +0000 UTC m=+5255.195616692" Oct 06 13:15:02 crc kubenswrapper[4958]: I1006 13:15:02.298376 4958 generic.go:334] "Generic (PLEG): container finished" podID="d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" containerID="b0250838a2c75f33151f70f3f81636d5d9e3c90d4ae8cd29f04185c2611f3fc2" exitCode=0 Oct 06 13:15:02 crc kubenswrapper[4958]: I1006 13:15:02.298462 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" event={"ID":"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e","Type":"ContainerDied","Data":"b0250838a2c75f33151f70f3f81636d5d9e3c90d4ae8cd29f04185c2611f3fc2"} Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.703332 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.769272 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-config-volume\") pod \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.769466 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-secret-volume\") pod \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.769698 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krj4b\" (UniqueName: \"kubernetes.io/projected/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-kube-api-access-krj4b\") pod \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\" (UID: \"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e\") " Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.769824 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-config-volume" (OuterVolumeSpecName: "config-volume") pod "d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" (UID: "d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.770499 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.775281 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" (UID: "d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.778526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-kube-api-access-krj4b" (OuterVolumeSpecName: "kube-api-access-krj4b") pod "d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" (UID: "d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e"). InnerVolumeSpecName "kube-api-access-krj4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.871277 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krj4b\" (UniqueName: \"kubernetes.io/projected/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-kube-api-access-krj4b\") on node \"crc\" DevicePath \"\"" Oct 06 13:15:03 crc kubenswrapper[4958]: I1006 13:15:03.871307 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:15:04 crc kubenswrapper[4958]: I1006 13:15:04.327748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" event={"ID":"d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e","Type":"ContainerDied","Data":"b41a8ad89f7e01f4e98a028c5382b4d7d41f3f8c70b8334a17eca6c8277b2085"} Oct 06 13:15:04 crc kubenswrapper[4958]: I1006 13:15:04.328079 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b41a8ad89f7e01f4e98a028c5382b4d7d41f3f8c70b8334a17eca6c8277b2085" Oct 06 13:15:04 crc kubenswrapper[4958]: I1006 13:15:04.327826 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-k5m9p" Oct 06 13:15:04 crc kubenswrapper[4958]: I1006 13:15:04.387101 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf"] Oct 06 13:15:04 crc kubenswrapper[4958]: I1006 13:15:04.397639 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-wdrsf"] Oct 06 13:15:04 crc kubenswrapper[4958]: I1006 13:15:04.927775 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51131f3b-9af1-45ce-8e80-cd7439743329" path="/var/lib/kubelet/pods/51131f3b-9af1-45ce-8e80-cd7439743329/volumes" Oct 06 13:15:23 crc kubenswrapper[4958]: I1006 13:15:23.801741 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:15:23 crc kubenswrapper[4958]: I1006 13:15:23.802283 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:15:23 crc kubenswrapper[4958]: I1006 13:15:23.802345 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 13:15:23 crc kubenswrapper[4958]: I1006 13:15:23.803105 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777"} 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:15:23 crc kubenswrapper[4958]: I1006 13:15:23.803171 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" gracePeriod=600 Oct 06 13:15:23 crc kubenswrapper[4958]: E1006 13:15:23.928877 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:15:24 crc kubenswrapper[4958]: I1006 13:15:24.537926 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" exitCode=0 Oct 06 13:15:24 crc kubenswrapper[4958]: I1006 13:15:24.537981 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777"} Oct 06 13:15:24 crc kubenswrapper[4958]: I1006 13:15:24.538224 4958 scope.go:117] "RemoveContainer" containerID="b6fcb4d8b3d156808c5f09a3bfa6066cc9d9732cd809369a2217718cfb946c7b" Oct 06 13:15:24 crc kubenswrapper[4958]: I1006 13:15:24.538970 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 
06 13:15:24 crc kubenswrapper[4958]: E1006 13:15:24.539574 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:15:33 crc kubenswrapper[4958]: I1006 13:15:33.143683 4958 scope.go:117] "RemoveContainer" containerID="b6974d44ddc2acb6f3131290df8ac4efe132d5257a39ec65fc0e58853e4ef23e" Oct 06 13:15:37 crc kubenswrapper[4958]: I1006 13:15:37.916260 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:15:37 crc kubenswrapper[4958]: E1006 13:15:37.917062 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:15:50 crc kubenswrapper[4958]: I1006 13:15:50.913912 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:15:50 crc kubenswrapper[4958]: E1006 13:15:50.914590 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.574409 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8bss"] Oct 06 13:15:52 crc kubenswrapper[4958]: E1006 13:15:52.575403 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" containerName="collect-profiles" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.575860 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" containerName="collect-profiles" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.577443 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d9dd8b-75a4-4cb7-a7e3-d4531b93679e" containerName="collect-profiles" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.580075 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.588563 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8bss"] Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.753863 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h56lg"] Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.756090 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.766715 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h56lg"] Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.774832 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmx7f\" (UniqueName: \"kubernetes.io/projected/4f00eac0-be56-4173-884d-adfc060cdb66-kube-api-access-rmx7f\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.774933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-catalog-content\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.775728 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-utilities\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.878514 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-utilities\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.878586 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-utilities\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.878653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-catalog-content\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.878672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttg5q\" (UniqueName: \"kubernetes.io/projected/c68298ec-6856-49fe-97a3-aeea130bb53a-kube-api-access-ttg5q\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.878697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmx7f\" (UniqueName: \"kubernetes.io/projected/4f00eac0-be56-4173-884d-adfc060cdb66-kube-api-access-rmx7f\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.878733 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-catalog-content\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: 
I1006 13:15:52.879504 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-utilities\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.879735 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-catalog-content\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.898037 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmx7f\" (UniqueName: \"kubernetes.io/projected/4f00eac0-be56-4173-884d-adfc060cdb66-kube-api-access-rmx7f\") pod \"redhat-marketplace-v8bss\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.910306 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.980748 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-utilities\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.980831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-catalog-content\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.980853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttg5q\" (UniqueName: \"kubernetes.io/projected/c68298ec-6856-49fe-97a3-aeea130bb53a-kube-api-access-ttg5q\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.981804 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-utilities\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:52 crc kubenswrapper[4958]: I1006 13:15:52.982330 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-catalog-content\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " 
pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.002468 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttg5q\" (UniqueName: \"kubernetes.io/projected/c68298ec-6856-49fe-97a3-aeea130bb53a-kube-api-access-ttg5q\") pod \"community-operators-h56lg\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.072051 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.455921 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8bss"] Oct 06 13:15:53 crc kubenswrapper[4958]: W1006 13:15:53.460526 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f00eac0_be56_4173_884d_adfc060cdb66.slice/crio-fb37a840edff65f178390076bc32ea6171879f65b6fe3ddc2577faa4c3b7c09d WatchSource:0}: Error finding container fb37a840edff65f178390076bc32ea6171879f65b6fe3ddc2577faa4c3b7c09d: Status 404 returned error can't find the container with id fb37a840edff65f178390076bc32ea6171879f65b6fe3ddc2577faa4c3b7c09d Oct 06 13:15:53 crc kubenswrapper[4958]: W1006 13:15:53.608588 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc68298ec_6856_49fe_97a3_aeea130bb53a.slice/crio-b817713e8f25693a661d518898bf4a9b04fa08fee3eded74f90482ad74fb4822 WatchSource:0}: Error finding container b817713e8f25693a661d518898bf4a9b04fa08fee3eded74f90482ad74fb4822: Status 404 returned error can't find the container with id b817713e8f25693a661d518898bf4a9b04fa08fee3eded74f90482ad74fb4822 Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.614455 4958 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/community-operators-h56lg"] Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.881114 4958 generic.go:334] "Generic (PLEG): container finished" podID="4f00eac0-be56-4173-884d-adfc060cdb66" containerID="e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1" exitCode=0 Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.881184 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8bss" event={"ID":"4f00eac0-be56-4173-884d-adfc060cdb66","Type":"ContainerDied","Data":"e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1"} Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.881498 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8bss" event={"ID":"4f00eac0-be56-4173-884d-adfc060cdb66","Type":"ContainerStarted","Data":"fb37a840edff65f178390076bc32ea6171879f65b6fe3ddc2577faa4c3b7c09d"} Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.883120 4958 generic.go:334] "Generic (PLEG): container finished" podID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerID="8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6" exitCode=0 Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.883210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56lg" event={"ID":"c68298ec-6856-49fe-97a3-aeea130bb53a","Type":"ContainerDied","Data":"8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6"} Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.883248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56lg" event={"ID":"c68298ec-6856-49fe-97a3-aeea130bb53a","Type":"ContainerStarted","Data":"b817713e8f25693a661d518898bf4a9b04fa08fee3eded74f90482ad74fb4822"} Oct 06 13:15:53 crc kubenswrapper[4958]: I1006 13:15:53.883822 4958 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Oct 06 13:15:54 crc kubenswrapper[4958]: I1006 13:15:54.895642 4958 generic.go:334] "Generic (PLEG): container finished" podID="4f00eac0-be56-4173-884d-adfc060cdb66" containerID="aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be" exitCode=0 Oct 06 13:15:54 crc kubenswrapper[4958]: I1006 13:15:54.895703 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8bss" event={"ID":"4f00eac0-be56-4173-884d-adfc060cdb66","Type":"ContainerDied","Data":"aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be"} Oct 06 13:15:55 crc kubenswrapper[4958]: I1006 13:15:55.911009 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8bss" event={"ID":"4f00eac0-be56-4173-884d-adfc060cdb66","Type":"ContainerStarted","Data":"3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6"} Oct 06 13:15:55 crc kubenswrapper[4958]: I1006 13:15:55.923930 4958 generic.go:334] "Generic (PLEG): container finished" podID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerID="efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f" exitCode=0 Oct 06 13:15:55 crc kubenswrapper[4958]: I1006 13:15:55.924011 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56lg" event={"ID":"c68298ec-6856-49fe-97a3-aeea130bb53a","Type":"ContainerDied","Data":"efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f"} Oct 06 13:15:55 crc kubenswrapper[4958]: I1006 13:15:55.967087 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8bss" podStartSLOduration=2.26518066 podStartE2EDuration="3.967063915s" podCreationTimestamp="2025-10-06 13:15:52 +0000 UTC" firstStartedPulling="2025-10-06 13:15:53.883438121 +0000 UTC m=+5307.769463469" lastFinishedPulling="2025-10-06 13:15:55.585321386 +0000 UTC m=+5309.471346724" 
observedRunningTime="2025-10-06 13:15:55.938301096 +0000 UTC m=+5309.824326484" watchObservedRunningTime="2025-10-06 13:15:55.967063915 +0000 UTC m=+5309.853089233" Oct 06 13:15:56 crc kubenswrapper[4958]: I1006 13:15:56.954121 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56lg" event={"ID":"c68298ec-6856-49fe-97a3-aeea130bb53a","Type":"ContainerStarted","Data":"1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98"} Oct 06 13:15:56 crc kubenswrapper[4958]: I1006 13:15:56.992089 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h56lg" podStartSLOduration=2.559923423 podStartE2EDuration="4.992061815s" podCreationTimestamp="2025-10-06 13:15:52 +0000 UTC" firstStartedPulling="2025-10-06 13:15:53.886760169 +0000 UTC m=+5307.772785477" lastFinishedPulling="2025-10-06 13:15:56.318898561 +0000 UTC m=+5310.204923869" observedRunningTime="2025-10-06 13:15:56.980728091 +0000 UTC m=+5310.866753399" watchObservedRunningTime="2025-10-06 13:15:56.992061815 +0000 UTC m=+5310.878087163" Oct 06 13:16:01 crc kubenswrapper[4958]: I1006 13:16:01.913251 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:16:01 crc kubenswrapper[4958]: E1006 13:16:01.914037 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:16:02 crc kubenswrapper[4958]: I1006 13:16:02.910836 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 
13:16:02 crc kubenswrapper[4958]: I1006 13:16:02.911121 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:16:03 crc kubenswrapper[4958]: I1006 13:16:03.003863 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:16:03 crc kubenswrapper[4958]: I1006 13:16:03.072981 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:16:03 crc kubenswrapper[4958]: I1006 13:16:03.073031 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:16:03 crc kubenswrapper[4958]: I1006 13:16:03.089581 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:16:03 crc kubenswrapper[4958]: I1006 13:16:03.115971 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:16:04 crc kubenswrapper[4958]: I1006 13:16:04.096036 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:16:05 crc kubenswrapper[4958]: I1006 13:16:05.949686 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8bss"] Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.055340 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8bss" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="registry-server" containerID="cri-o://3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6" gracePeriod=2 Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.143481 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-h56lg"] Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.143718 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h56lg" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="registry-server" containerID="cri-o://1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98" gracePeriod=2 Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.540256 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.592815 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-catalog-content\") pod \"4f00eac0-be56-4173-884d-adfc060cdb66\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.592930 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-utilities\") pod \"4f00eac0-be56-4173-884d-adfc060cdb66\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.593006 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmx7f\" (UniqueName: \"kubernetes.io/projected/4f00eac0-be56-4173-884d-adfc060cdb66-kube-api-access-rmx7f\") pod \"4f00eac0-be56-4173-884d-adfc060cdb66\" (UID: \"4f00eac0-be56-4173-884d-adfc060cdb66\") " Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.593962 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-utilities" (OuterVolumeSpecName: "utilities") pod "4f00eac0-be56-4173-884d-adfc060cdb66" (UID: 
"4f00eac0-be56-4173-884d-adfc060cdb66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.598477 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f00eac0-be56-4173-884d-adfc060cdb66-kube-api-access-rmx7f" (OuterVolumeSpecName: "kube-api-access-rmx7f") pod "4f00eac0-be56-4173-884d-adfc060cdb66" (UID: "4f00eac0-be56-4173-884d-adfc060cdb66"). InnerVolumeSpecName "kube-api-access-rmx7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.605359 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f00eac0-be56-4173-884d-adfc060cdb66" (UID: "4f00eac0-be56-4173-884d-adfc060cdb66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.621658 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.694517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttg5q\" (UniqueName: \"kubernetes.io/projected/c68298ec-6856-49fe-97a3-aeea130bb53a-kube-api-access-ttg5q\") pod \"c68298ec-6856-49fe-97a3-aeea130bb53a\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.694594 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-utilities\") pod \"c68298ec-6856-49fe-97a3-aeea130bb53a\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.694706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-catalog-content\") pod \"c68298ec-6856-49fe-97a3-aeea130bb53a\" (UID: \"c68298ec-6856-49fe-97a3-aeea130bb53a\") " Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.695182 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.695211 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmx7f\" (UniqueName: \"kubernetes.io/projected/4f00eac0-be56-4173-884d-adfc060cdb66-kube-api-access-rmx7f\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.695223 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f00eac0-be56-4173-884d-adfc060cdb66-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:06 crc kubenswrapper[4958]: 
I1006 13:16:06.695449 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-utilities" (OuterVolumeSpecName: "utilities") pod "c68298ec-6856-49fe-97a3-aeea130bb53a" (UID: "c68298ec-6856-49fe-97a3-aeea130bb53a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.697777 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68298ec-6856-49fe-97a3-aeea130bb53a-kube-api-access-ttg5q" (OuterVolumeSpecName: "kube-api-access-ttg5q") pod "c68298ec-6856-49fe-97a3-aeea130bb53a" (UID: "c68298ec-6856-49fe-97a3-aeea130bb53a"). InnerVolumeSpecName "kube-api-access-ttg5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.744343 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c68298ec-6856-49fe-97a3-aeea130bb53a" (UID: "c68298ec-6856-49fe-97a3-aeea130bb53a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.798014 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttg5q\" (UniqueName: \"kubernetes.io/projected/c68298ec-6856-49fe-97a3-aeea130bb53a-kube-api-access-ttg5q\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.798087 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:06 crc kubenswrapper[4958]: I1006 13:16:06.798115 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68298ec-6856-49fe-97a3-aeea130bb53a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.073482 4958 generic.go:334] "Generic (PLEG): container finished" podID="4f00eac0-be56-4173-884d-adfc060cdb66" containerID="3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6" exitCode=0 Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.073586 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8bss" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.073595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8bss" event={"ID":"4f00eac0-be56-4173-884d-adfc060cdb66","Type":"ContainerDied","Data":"3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6"} Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.073702 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8bss" event={"ID":"4f00eac0-be56-4173-884d-adfc060cdb66","Type":"ContainerDied","Data":"fb37a840edff65f178390076bc32ea6171879f65b6fe3ddc2577faa4c3b7c09d"} Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.073736 4958 scope.go:117] "RemoveContainer" containerID="3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.077532 4958 generic.go:334] "Generic (PLEG): container finished" podID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerID="1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98" exitCode=0 Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.077612 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h56lg" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.077716 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56lg" event={"ID":"c68298ec-6856-49fe-97a3-aeea130bb53a","Type":"ContainerDied","Data":"1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98"} Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.078005 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56lg" event={"ID":"c68298ec-6856-49fe-97a3-aeea130bb53a","Type":"ContainerDied","Data":"b817713e8f25693a661d518898bf4a9b04fa08fee3eded74f90482ad74fb4822"} Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.116419 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8bss"] Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.119598 4958 scope.go:117] "RemoveContainer" containerID="aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.128214 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8bss"] Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.139194 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h56lg"] Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.148607 4958 scope.go:117] "RemoveContainer" containerID="e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.149646 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h56lg"] Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.171364 4958 scope.go:117] "RemoveContainer" containerID="3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6" Oct 06 13:16:07 crc kubenswrapper[4958]: E1006 
13:16:07.172438 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6\": container with ID starting with 3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6 not found: ID does not exist" containerID="3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.172555 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6"} err="failed to get container status \"3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6\": rpc error: code = NotFound desc = could not find container \"3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6\": container with ID starting with 3d4def86a40ab64ceb84fe09a42a245520976c26951bba841bd34eddfe6995d6 not found: ID does not exist" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.172649 4958 scope.go:117] "RemoveContainer" containerID="aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be" Oct 06 13:16:07 crc kubenswrapper[4958]: E1006 13:16:07.173194 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be\": container with ID starting with aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be not found: ID does not exist" containerID="aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.173304 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be"} err="failed to get container status \"aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be\": rpc 
error: code = NotFound desc = could not find container \"aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be\": container with ID starting with aebd4db55a0c9613efd6e3bced7ff37474476bbdbd8248b47c7563bb64d149be not found: ID does not exist" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.173402 4958 scope.go:117] "RemoveContainer" containerID="e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1" Oct 06 13:16:07 crc kubenswrapper[4958]: E1006 13:16:07.174081 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1\": container with ID starting with e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1 not found: ID does not exist" containerID="e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.174212 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1"} err="failed to get container status \"e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1\": rpc error: code = NotFound desc = could not find container \"e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1\": container with ID starting with e6b9e2f958b12b3cb21733f1ec8fc1908269c06563315236991804dccb9820d1 not found: ID does not exist" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.174305 4958 scope.go:117] "RemoveContainer" containerID="1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.226808 4958 scope.go:117] "RemoveContainer" containerID="efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.255332 4958 scope.go:117] "RemoveContainer" 
containerID="8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.295690 4958 scope.go:117] "RemoveContainer" containerID="1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98" Oct 06 13:16:07 crc kubenswrapper[4958]: E1006 13:16:07.296254 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98\": container with ID starting with 1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98 not found: ID does not exist" containerID="1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.296303 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98"} err="failed to get container status \"1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98\": rpc error: code = NotFound desc = could not find container \"1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98\": container with ID starting with 1b60fcc4de36117ecc132db46a77c8134382c9d468830908c1734b3b2c82ba98 not found: ID does not exist" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.296335 4958 scope.go:117] "RemoveContainer" containerID="efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f" Oct 06 13:16:07 crc kubenswrapper[4958]: E1006 13:16:07.296758 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f\": container with ID starting with efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f not found: ID does not exist" containerID="efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f" Oct 06 13:16:07 crc 
kubenswrapper[4958]: I1006 13:16:07.296799 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f"} err="failed to get container status \"efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f\": rpc error: code = NotFound desc = could not find container \"efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f\": container with ID starting with efa168b3bb0f63d199ac172869c7a5d9b5dcf78d71416d4b36594861e649675f not found: ID does not exist" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.296820 4958 scope.go:117] "RemoveContainer" containerID="8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6" Oct 06 13:16:07 crc kubenswrapper[4958]: E1006 13:16:07.297249 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6\": container with ID starting with 8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6 not found: ID does not exist" containerID="8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6" Oct 06 13:16:07 crc kubenswrapper[4958]: I1006 13:16:07.297286 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6"} err="failed to get container status \"8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6\": rpc error: code = NotFound desc = could not find container \"8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6\": container with ID starting with 8ff2284dc1223536aef55073c5dacd69780500aeca6977c71987708d73fa44e6 not found: ID does not exist" Oct 06 13:16:08 crc kubenswrapper[4958]: I1006 13:16:08.932457 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" 
path="/var/lib/kubelet/pods/4f00eac0-be56-4173-884d-adfc060cdb66/volumes" Oct 06 13:16:08 crc kubenswrapper[4958]: I1006 13:16:08.934498 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" path="/var/lib/kubelet/pods/c68298ec-6856-49fe-97a3-aeea130bb53a/volumes" Oct 06 13:16:12 crc kubenswrapper[4958]: I1006 13:16:12.915277 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:16:12 crc kubenswrapper[4958]: E1006 13:16:12.916514 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:16:23 crc kubenswrapper[4958]: I1006 13:16:23.914688 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:16:23 crc kubenswrapper[4958]: E1006 13:16:23.915467 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:16:38 crc kubenswrapper[4958]: I1006 13:16:38.913624 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:16:38 crc kubenswrapper[4958]: E1006 13:16:38.914759 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.232999 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tddls"] Oct 06 13:16:46 crc kubenswrapper[4958]: E1006 13:16:46.235091 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="registry-server" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235109 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="registry-server" Oct 06 13:16:46 crc kubenswrapper[4958]: E1006 13:16:46.235135 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="extract-content" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235161 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="extract-content" Oct 06 13:16:46 crc kubenswrapper[4958]: E1006 13:16:46.235173 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="registry-server" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235181 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="registry-server" Oct 06 13:16:46 crc kubenswrapper[4958]: E1006 13:16:46.235195 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="extract-utilities" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235205 4958 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="extract-utilities" Oct 06 13:16:46 crc kubenswrapper[4958]: E1006 13:16:46.235235 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="extract-utilities" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235243 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="extract-utilities" Oct 06 13:16:46 crc kubenswrapper[4958]: E1006 13:16:46.235263 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="extract-content" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235271 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="extract-content" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235505 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f00eac0-be56-4173-884d-adfc060cdb66" containerName="registry-server" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.235528 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68298ec-6856-49fe-97a3-aeea130bb53a" containerName="registry-server" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.237279 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.254420 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tddls"] Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.356423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-utilities\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.356725 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zj4w\" (UniqueName: \"kubernetes.io/projected/1629f4e2-053e-44b8-8770-25d7543a2c88-kube-api-access-9zj4w\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.356799 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-catalog-content\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.458764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zj4w\" (UniqueName: \"kubernetes.io/projected/1629f4e2-053e-44b8-8770-25d7543a2c88-kube-api-access-9zj4w\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.458877 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-catalog-content\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.458934 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-utilities\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.459517 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-catalog-content\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.459695 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-utilities\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.485005 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zj4w\" (UniqueName: \"kubernetes.io/projected/1629f4e2-053e-44b8-8770-25d7543a2c88-kube-api-access-9zj4w\") pod \"certified-operators-tddls\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:46 crc kubenswrapper[4958]: I1006 13:16:46.570844 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:47 crc kubenswrapper[4958]: I1006 13:16:47.030328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tddls"] Oct 06 13:16:47 crc kubenswrapper[4958]: W1006 13:16:47.237169 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1629f4e2_053e_44b8_8770_25d7543a2c88.slice/crio-3024f0cd84235d7766ebd14ff1fdb71fc2af3be940a193b826d59d7e391f976a WatchSource:0}: Error finding container 3024f0cd84235d7766ebd14ff1fdb71fc2af3be940a193b826d59d7e391f976a: Status 404 returned error can't find the container with id 3024f0cd84235d7766ebd14ff1fdb71fc2af3be940a193b826d59d7e391f976a Oct 06 13:16:47 crc kubenswrapper[4958]: I1006 13:16:47.507388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tddls" event={"ID":"1629f4e2-053e-44b8-8770-25d7543a2c88","Type":"ContainerStarted","Data":"3024f0cd84235d7766ebd14ff1fdb71fc2af3be940a193b826d59d7e391f976a"} Oct 06 13:16:48 crc kubenswrapper[4958]: I1006 13:16:48.532568 4958 generic.go:334] "Generic (PLEG): container finished" podID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerID="72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211" exitCode=0 Oct 06 13:16:48 crc kubenswrapper[4958]: I1006 13:16:48.532773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tddls" event={"ID":"1629f4e2-053e-44b8-8770-25d7543a2c88","Type":"ContainerDied","Data":"72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211"} Oct 06 13:16:50 crc kubenswrapper[4958]: I1006 13:16:50.557219 4958 generic.go:334] "Generic (PLEG): container finished" podID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerID="13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873" exitCode=0 Oct 06 13:16:50 crc kubenswrapper[4958]: I1006 
13:16:50.557325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tddls" event={"ID":"1629f4e2-053e-44b8-8770-25d7543a2c88","Type":"ContainerDied","Data":"13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873"} Oct 06 13:16:51 crc kubenswrapper[4958]: I1006 13:16:51.625067 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tddls" event={"ID":"1629f4e2-053e-44b8-8770-25d7543a2c88","Type":"ContainerStarted","Data":"e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f"} Oct 06 13:16:51 crc kubenswrapper[4958]: I1006 13:16:51.649985 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tddls" podStartSLOduration=3.24486425 podStartE2EDuration="5.649963516s" podCreationTimestamp="2025-10-06 13:16:46 +0000 UTC" firstStartedPulling="2025-10-06 13:16:48.551519221 +0000 UTC m=+5362.437544569" lastFinishedPulling="2025-10-06 13:16:50.956618527 +0000 UTC m=+5364.842643835" observedRunningTime="2025-10-06 13:16:51.646895426 +0000 UTC m=+5365.532920754" watchObservedRunningTime="2025-10-06 13:16:51.649963516 +0000 UTC m=+5365.535988824" Oct 06 13:16:51 crc kubenswrapper[4958]: I1006 13:16:51.913480 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:16:51 crc kubenswrapper[4958]: E1006 13:16:51.914028 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:16:56 crc kubenswrapper[4958]: I1006 13:16:56.571080 4958 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:56 crc kubenswrapper[4958]: I1006 13:16:56.571929 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:56 crc kubenswrapper[4958]: I1006 13:16:56.634607 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:56 crc kubenswrapper[4958]: I1006 13:16:56.741204 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:56 crc kubenswrapper[4958]: I1006 13:16:56.888218 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tddls"] Oct 06 13:16:58 crc kubenswrapper[4958]: I1006 13:16:58.703390 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tddls" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="registry-server" containerID="cri-o://e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f" gracePeriod=2 Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.692603 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.720139 4958 generic.go:334] "Generic (PLEG): container finished" podID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerID="e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f" exitCode=0 Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.720206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tddls" event={"ID":"1629f4e2-053e-44b8-8770-25d7543a2c88","Type":"ContainerDied","Data":"e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f"} Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.720242 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tddls" event={"ID":"1629f4e2-053e-44b8-8770-25d7543a2c88","Type":"ContainerDied","Data":"3024f0cd84235d7766ebd14ff1fdb71fc2af3be940a193b826d59d7e391f976a"} Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.720262 4958 scope.go:117] "RemoveContainer" containerID="e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.721788 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tddls" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.756074 4958 scope.go:117] "RemoveContainer" containerID="13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.790329 4958 scope.go:117] "RemoveContainer" containerID="72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.793886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zj4w\" (UniqueName: \"kubernetes.io/projected/1629f4e2-053e-44b8-8770-25d7543a2c88-kube-api-access-9zj4w\") pod \"1629f4e2-053e-44b8-8770-25d7543a2c88\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.794069 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-catalog-content\") pod \"1629f4e2-053e-44b8-8770-25d7543a2c88\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.794350 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-utilities\") pod \"1629f4e2-053e-44b8-8770-25d7543a2c88\" (UID: \"1629f4e2-053e-44b8-8770-25d7543a2c88\") " Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.795701 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-utilities" (OuterVolumeSpecName: "utilities") pod "1629f4e2-053e-44b8-8770-25d7543a2c88" (UID: "1629f4e2-053e-44b8-8770-25d7543a2c88"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.800822 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1629f4e2-053e-44b8-8770-25d7543a2c88-kube-api-access-9zj4w" (OuterVolumeSpecName: "kube-api-access-9zj4w") pod "1629f4e2-053e-44b8-8770-25d7543a2c88" (UID: "1629f4e2-053e-44b8-8770-25d7543a2c88"). InnerVolumeSpecName "kube-api-access-9zj4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.846891 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1629f4e2-053e-44b8-8770-25d7543a2c88" (UID: "1629f4e2-053e-44b8-8770-25d7543a2c88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.859777 4958 scope.go:117] "RemoveContainer" containerID="e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f" Oct 06 13:16:59 crc kubenswrapper[4958]: E1006 13:16:59.860292 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f\": container with ID starting with e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f not found: ID does not exist" containerID="e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.860334 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f"} err="failed to get container status \"e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f\": rpc error: code = NotFound desc = could not find 
container \"e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f\": container with ID starting with e8d045e89f7ef381b3892e9e6a6ec4c2a64f68fa0c69f01b4c21daf025ce9a2f not found: ID does not exist" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.860358 4958 scope.go:117] "RemoveContainer" containerID="13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873" Oct 06 13:16:59 crc kubenswrapper[4958]: E1006 13:16:59.860863 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873\": container with ID starting with 13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873 not found: ID does not exist" containerID="13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.860895 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873"} err="failed to get container status \"13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873\": rpc error: code = NotFound desc = could not find container \"13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873\": container with ID starting with 13e6a6b5539e2da1ba6392701796ab7852b08ed757ec197b5bcf544351f38873 not found: ID does not exist" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.860917 4958 scope.go:117] "RemoveContainer" containerID="72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211" Oct 06 13:16:59 crc kubenswrapper[4958]: E1006 13:16:59.861221 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211\": container with ID starting with 72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211 not found: ID does 
not exist" containerID="72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.861243 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211"} err="failed to get container status \"72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211\": rpc error: code = NotFound desc = could not find container \"72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211\": container with ID starting with 72dcde23b6667bf0258695f6d71084fb884462c08686440c89d1dffffefd8211 not found: ID does not exist" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.897221 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.897267 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zj4w\" (UniqueName: \"kubernetes.io/projected/1629f4e2-053e-44b8-8770-25d7543a2c88-kube-api-access-9zj4w\") on node \"crc\" DevicePath \"\"" Oct 06 13:16:59 crc kubenswrapper[4958]: I1006 13:16:59.897282 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1629f4e2-053e-44b8-8770-25d7543a2c88-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:17:00 crc kubenswrapper[4958]: I1006 13:17:00.073752 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tddls"] Oct 06 13:17:00 crc kubenswrapper[4958]: I1006 13:17:00.087990 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tddls"] Oct 06 13:17:00 crc kubenswrapper[4958]: I1006 13:17:00.935561 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" path="/var/lib/kubelet/pods/1629f4e2-053e-44b8-8770-25d7543a2c88/volumes" Oct 06 13:17:06 crc kubenswrapper[4958]: I1006 13:17:06.930559 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:17:06 crc kubenswrapper[4958]: E1006 13:17:06.932267 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:17:06 crc kubenswrapper[4958]: I1006 13:17:06.975827 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nctmk"] Oct 06 13:17:06 crc kubenswrapper[4958]: E1006 13:17:06.976422 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="extract-content" Oct 06 13:17:06 crc kubenswrapper[4958]: I1006 13:17:06.976451 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="extract-content" Oct 06 13:17:06 crc kubenswrapper[4958]: E1006 13:17:06.976475 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="registry-server" Oct 06 13:17:06 crc kubenswrapper[4958]: I1006 13:17:06.976486 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="registry-server" Oct 06 13:17:06 crc kubenswrapper[4958]: E1006 13:17:06.976510 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="extract-utilities" Oct 06 13:17:06 crc 
kubenswrapper[4958]: I1006 13:17:06.976522 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="extract-utilities" Oct 06 13:17:06 crc kubenswrapper[4958]: I1006 13:17:06.976883 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1629f4e2-053e-44b8-8770-25d7543a2c88" containerName="registry-server" Oct 06 13:17:06 crc kubenswrapper[4958]: I1006 13:17:06.981455 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.005347 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nctmk"] Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.075184 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vf6\" (UniqueName: \"kubernetes.io/projected/58e3129e-ebe5-44b8-b712-ed373382cef4-kube-api-access-k2vf6\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.075266 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-utilities\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.075620 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-catalog-content\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc 
kubenswrapper[4958]: I1006 13:17:07.177837 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2vf6\" (UniqueName: \"kubernetes.io/projected/58e3129e-ebe5-44b8-b712-ed373382cef4-kube-api-access-k2vf6\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.177950 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-utilities\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.178174 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-catalog-content\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.178804 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-utilities\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.178867 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-catalog-content\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.207650 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vf6\" (UniqueName: \"kubernetes.io/projected/58e3129e-ebe5-44b8-b712-ed373382cef4-kube-api-access-k2vf6\") pod \"redhat-operators-nctmk\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.313663 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:07 crc kubenswrapper[4958]: I1006 13:17:07.812818 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nctmk"] Oct 06 13:17:08 crc kubenswrapper[4958]: I1006 13:17:08.831007 4958 generic.go:334] "Generic (PLEG): container finished" podID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerID="3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694" exitCode=0 Oct 06 13:17:08 crc kubenswrapper[4958]: I1006 13:17:08.831130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nctmk" event={"ID":"58e3129e-ebe5-44b8-b712-ed373382cef4","Type":"ContainerDied","Data":"3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694"} Oct 06 13:17:08 crc kubenswrapper[4958]: I1006 13:17:08.831414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nctmk" event={"ID":"58e3129e-ebe5-44b8-b712-ed373382cef4","Type":"ContainerStarted","Data":"3bc31548343238421d8725b92148e8f4830c5b93856c896b19d23b36c9821356"} Oct 06 13:17:11 crc kubenswrapper[4958]: I1006 13:17:11.868409 4958 generic.go:334] "Generic (PLEG): container finished" podID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerID="9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191" exitCode=0 Oct 06 13:17:11 crc kubenswrapper[4958]: I1006 13:17:11.868691 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nctmk" event={"ID":"58e3129e-ebe5-44b8-b712-ed373382cef4","Type":"ContainerDied","Data":"9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191"} Oct 06 13:17:12 crc kubenswrapper[4958]: I1006 13:17:12.882233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nctmk" event={"ID":"58e3129e-ebe5-44b8-b712-ed373382cef4","Type":"ContainerStarted","Data":"4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9"} Oct 06 13:17:12 crc kubenswrapper[4958]: I1006 13:17:12.913411 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nctmk" podStartSLOduration=3.350800472 podStartE2EDuration="6.913390146s" podCreationTimestamp="2025-10-06 13:17:06 +0000 UTC" firstStartedPulling="2025-10-06 13:17:08.83471564 +0000 UTC m=+5382.720740948" lastFinishedPulling="2025-10-06 13:17:12.397305314 +0000 UTC m=+5386.283330622" observedRunningTime="2025-10-06 13:17:12.912251932 +0000 UTC m=+5386.798277250" watchObservedRunningTime="2025-10-06 13:17:12.913390146 +0000 UTC m=+5386.799415464" Oct 06 13:17:17 crc kubenswrapper[4958]: I1006 13:17:17.313925 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:17 crc kubenswrapper[4958]: I1006 13:17:17.314291 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:17 crc kubenswrapper[4958]: I1006 13:17:17.913211 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:17:17 crc kubenswrapper[4958]: E1006 13:17:17.913822 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:17:18 crc kubenswrapper[4958]: I1006 13:17:18.403008 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nctmk" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="registry-server" probeResult="failure" output=< Oct 06 13:17:18 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 06 13:17:18 crc kubenswrapper[4958]: > Oct 06 13:17:27 crc kubenswrapper[4958]: I1006 13:17:27.374793 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:27 crc kubenswrapper[4958]: I1006 13:17:27.451600 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:27 crc kubenswrapper[4958]: I1006 13:17:27.631592 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nctmk"] Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.060747 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nctmk" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="registry-server" containerID="cri-o://4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9" gracePeriod=2 Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.542966 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.600117 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-catalog-content\") pod \"58e3129e-ebe5-44b8-b712-ed373382cef4\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.600182 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-utilities\") pod \"58e3129e-ebe5-44b8-b712-ed373382cef4\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.600513 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2vf6\" (UniqueName: \"kubernetes.io/projected/58e3129e-ebe5-44b8-b712-ed373382cef4-kube-api-access-k2vf6\") pod \"58e3129e-ebe5-44b8-b712-ed373382cef4\" (UID: \"58e3129e-ebe5-44b8-b712-ed373382cef4\") " Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.602011 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-utilities" (OuterVolumeSpecName: "utilities") pod "58e3129e-ebe5-44b8-b712-ed373382cef4" (UID: "58e3129e-ebe5-44b8-b712-ed373382cef4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.620422 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e3129e-ebe5-44b8-b712-ed373382cef4-kube-api-access-k2vf6" (OuterVolumeSpecName: "kube-api-access-k2vf6") pod "58e3129e-ebe5-44b8-b712-ed373382cef4" (UID: "58e3129e-ebe5-44b8-b712-ed373382cef4"). InnerVolumeSpecName "kube-api-access-k2vf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.686821 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e3129e-ebe5-44b8-b712-ed373382cef4" (UID: "58e3129e-ebe5-44b8-b712-ed373382cef4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.704695 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.704744 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e3129e-ebe5-44b8-b712-ed373382cef4-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:17:29 crc kubenswrapper[4958]: I1006 13:17:29.704764 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2vf6\" (UniqueName: \"kubernetes.io/projected/58e3129e-ebe5-44b8-b712-ed373382cef4-kube-api-access-k2vf6\") on node \"crc\" DevicePath \"\"" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.076638 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nctmk" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.076681 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nctmk" event={"ID":"58e3129e-ebe5-44b8-b712-ed373382cef4","Type":"ContainerDied","Data":"4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9"} Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.076531 4958 generic.go:334] "Generic (PLEG): container finished" podID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerID="4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9" exitCode=0 Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.076745 4958 scope.go:117] "RemoveContainer" containerID="4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.076770 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nctmk" event={"ID":"58e3129e-ebe5-44b8-b712-ed373382cef4","Type":"ContainerDied","Data":"3bc31548343238421d8725b92148e8f4830c5b93856c896b19d23b36c9821356"} Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.130610 4958 scope.go:117] "RemoveContainer" containerID="9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.159268 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nctmk"] Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.161209 4958 scope.go:117] "RemoveContainer" containerID="3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.169346 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nctmk"] Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.199764 4958 scope.go:117] "RemoveContainer" 
containerID="4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9" Oct 06 13:17:30 crc kubenswrapper[4958]: E1006 13:17:30.200174 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9\": container with ID starting with 4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9 not found: ID does not exist" containerID="4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.200209 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9"} err="failed to get container status \"4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9\": rpc error: code = NotFound desc = could not find container \"4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9\": container with ID starting with 4253817cf0874209ad87d9d0470de2acffdc14c203858964d9abe2813d4db6d9 not found: ID does not exist" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.200231 4958 scope.go:117] "RemoveContainer" containerID="9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191" Oct 06 13:17:30 crc kubenswrapper[4958]: E1006 13:17:30.200421 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191\": container with ID starting with 9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191 not found: ID does not exist" containerID="9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.200444 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191"} err="failed to get container status \"9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191\": rpc error: code = NotFound desc = could not find container \"9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191\": container with ID starting with 9c8c97ee1d13668d6bfbd3eaae61b5e5dce355bfb18a2737b1f89c7c8613f191 not found: ID does not exist" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.200464 4958 scope.go:117] "RemoveContainer" containerID="3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694" Oct 06 13:17:30 crc kubenswrapper[4958]: E1006 13:17:30.200927 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694\": container with ID starting with 3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694 not found: ID does not exist" containerID="3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.200973 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694"} err="failed to get container status \"3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694\": rpc error: code = NotFound desc = could not find container \"3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694\": container with ID starting with 3c50cc7688eed2b0ef4dfe3335ead25d32c8d368d30c08c4e328f077349e8694 not found: ID does not exist" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.913817 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:17:30 crc kubenswrapper[4958]: E1006 13:17:30.914742 4958 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:17:30 crc kubenswrapper[4958]: I1006 13:17:30.925122 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" path="/var/lib/kubelet/pods/58e3129e-ebe5-44b8-b712-ed373382cef4/volumes" Oct 06 13:17:44 crc kubenswrapper[4958]: I1006 13:17:44.913701 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:17:44 crc kubenswrapper[4958]: E1006 13:17:44.914681 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:17:59 crc kubenswrapper[4958]: I1006 13:17:59.913646 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:17:59 crc kubenswrapper[4958]: E1006 13:17:59.915449 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:18:12 crc 
kubenswrapper[4958]: I1006 13:18:12.913807 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:18:12 crc kubenswrapper[4958]: E1006 13:18:12.914854 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:18:23 crc kubenswrapper[4958]: I1006 13:18:23.913271 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:18:23 crc kubenswrapper[4958]: E1006 13:18:23.914001 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:18:35 crc kubenswrapper[4958]: I1006 13:18:35.914258 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:18:35 crc kubenswrapper[4958]: E1006 13:18:35.915087 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 
06 13:18:46 crc kubenswrapper[4958]: I1006 13:18:46.924979 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:18:46 crc kubenswrapper[4958]: E1006 13:18:46.926727 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:18:58 crc kubenswrapper[4958]: I1006 13:18:58.913383 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:18:58 crc kubenswrapper[4958]: E1006 13:18:58.915574 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:19:13 crc kubenswrapper[4958]: I1006 13:19:13.914412 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:19:13 crc kubenswrapper[4958]: E1006 13:19:13.915599 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:19:24 crc kubenswrapper[4958]: I1006 13:19:24.913501 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:19:24 crc kubenswrapper[4958]: E1006 13:19:24.914496 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:19:39 crc kubenswrapper[4958]: I1006 13:19:39.912937 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:19:39 crc kubenswrapper[4958]: E1006 13:19:39.914032 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:19:53 crc kubenswrapper[4958]: I1006 13:19:53.913361 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:19:53 crc kubenswrapper[4958]: E1006 13:19:53.915847 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:20:05 crc kubenswrapper[4958]: I1006 13:20:05.913129 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:20:05 crc kubenswrapper[4958]: E1006 13:20:05.913813 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:20:19 crc kubenswrapper[4958]: I1006 13:20:19.913915 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:20:19 crc kubenswrapper[4958]: E1006 13:20:19.914941 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:20:30 crc kubenswrapper[4958]: I1006 13:20:30.913283 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:20:32 crc kubenswrapper[4958]: I1006 13:20:32.145255 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"1d41da7355ed8fca93307deb9ba65e8c10fc79ad77b6a1123fc848e312fa086a"} Oct 06 13:22:20 crc 
kubenswrapper[4958]: I1006 13:22:20.879073 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d9jtr/must-gather-thrjh"] Oct 06 13:22:20 crc kubenswrapper[4958]: E1006 13:22:20.880166 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="registry-server" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.880181 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="registry-server" Oct 06 13:22:20 crc kubenswrapper[4958]: E1006 13:22:20.880202 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="extract-utilities" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.880208 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="extract-utilities" Oct 06 13:22:20 crc kubenswrapper[4958]: E1006 13:22:20.880243 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="extract-content" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.880250 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="extract-content" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.880584 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e3129e-ebe5-44b8-b712-ed373382cef4" containerName="registry-server" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.882069 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.884049 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d9jtr"/"openshift-service-ca.crt" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.884319 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-d9jtr"/"default-dockercfg-5nnbx" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.884499 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d9jtr"/"kube-root-ca.crt" Oct 06 13:22:20 crc kubenswrapper[4958]: I1006 13:22:20.891186 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d9jtr/must-gather-thrjh"] Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.027751 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e979a79-8f58-4109-aa07-8f42159e70e3-must-gather-output\") pod \"must-gather-thrjh\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.028173 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2jb5\" (UniqueName: \"kubernetes.io/projected/4e979a79-8f58-4109-aa07-8f42159e70e3-kube-api-access-s2jb5\") pod \"must-gather-thrjh\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.129984 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e979a79-8f58-4109-aa07-8f42159e70e3-must-gather-output\") pod \"must-gather-thrjh\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " 
pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.130313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2jb5\" (UniqueName: \"kubernetes.io/projected/4e979a79-8f58-4109-aa07-8f42159e70e3-kube-api-access-s2jb5\") pod \"must-gather-thrjh\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.130475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e979a79-8f58-4109-aa07-8f42159e70e3-must-gather-output\") pod \"must-gather-thrjh\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.150089 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2jb5\" (UniqueName: \"kubernetes.io/projected/4e979a79-8f58-4109-aa07-8f42159e70e3-kube-api-access-s2jb5\") pod \"must-gather-thrjh\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.228684 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.698960 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 13:22:21 crc kubenswrapper[4958]: I1006 13:22:21.700560 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d9jtr/must-gather-thrjh"] Oct 06 13:22:22 crc kubenswrapper[4958]: I1006 13:22:22.222261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/must-gather-thrjh" event={"ID":"4e979a79-8f58-4109-aa07-8f42159e70e3","Type":"ContainerStarted","Data":"8005d0b3379f67b012ddfa1dea670d34bfe2a3a572576043e5db1d3728c8ab77"} Oct 06 13:22:26 crc kubenswrapper[4958]: I1006 13:22:26.258261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/must-gather-thrjh" event={"ID":"4e979a79-8f58-4109-aa07-8f42159e70e3","Type":"ContainerStarted","Data":"0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f"} Oct 06 13:22:27 crc kubenswrapper[4958]: I1006 13:22:27.269729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/must-gather-thrjh" event={"ID":"4e979a79-8f58-4109-aa07-8f42159e70e3","Type":"ContainerStarted","Data":"a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2"} Oct 06 13:22:27 crc kubenswrapper[4958]: I1006 13:22:27.292164 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d9jtr/must-gather-thrjh" podStartSLOduration=3.155368679 podStartE2EDuration="7.292141758s" podCreationTimestamp="2025-10-06 13:22:20 +0000 UTC" firstStartedPulling="2025-10-06 13:22:21.698738158 +0000 UTC m=+5695.584763466" lastFinishedPulling="2025-10-06 13:22:25.835511237 +0000 UTC m=+5699.721536545" observedRunningTime="2025-10-06 13:22:27.285836682 +0000 UTC m=+5701.171861990" watchObservedRunningTime="2025-10-06 13:22:27.292141758 +0000 UTC 
m=+5701.178167056" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.723953 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-q8qxb"] Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.726360 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.804482 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-host\") pod \"crc-debug-q8qxb\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.804635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzg67\" (UniqueName: \"kubernetes.io/projected/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-kube-api-access-xzg67\") pod \"crc-debug-q8qxb\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.906753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzg67\" (UniqueName: \"kubernetes.io/projected/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-kube-api-access-xzg67\") pod \"crc-debug-q8qxb\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.906919 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-host\") pod \"crc-debug-q8qxb\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.907049 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-host\") pod \"crc-debug-q8qxb\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:29 crc kubenswrapper[4958]: I1006 13:22:29.941940 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzg67\" (UniqueName: \"kubernetes.io/projected/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-kube-api-access-xzg67\") pod \"crc-debug-q8qxb\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:30 crc kubenswrapper[4958]: I1006 13:22:30.054677 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:22:30 crc kubenswrapper[4958]: W1006 13:22:30.120539 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce140a0_f3b8_4c71_8d9d_9feee2b1ff6a.slice/crio-d71c924ad0de6903ab4b1bf39f0a6468d416b9190b6c48d5c851d5c01f93140f WatchSource:0}: Error finding container d71c924ad0de6903ab4b1bf39f0a6468d416b9190b6c48d5c851d5c01f93140f: Status 404 returned error can't find the container with id d71c924ad0de6903ab4b1bf39f0a6468d416b9190b6c48d5c851d5c01f93140f Oct 06 13:22:30 crc kubenswrapper[4958]: I1006 13:22:30.294763 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" event={"ID":"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a","Type":"ContainerStarted","Data":"d71c924ad0de6903ab4b1bf39f0a6468d416b9190b6c48d5c851d5c01f93140f"} Oct 06 13:22:43 crc kubenswrapper[4958]: I1006 13:22:43.440955 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" 
event={"ID":"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a","Type":"ContainerStarted","Data":"03b40be786995d889587dee435a0e1efdba6c2954233a0870e1c9d9659c3436d"} Oct 06 13:22:43 crc kubenswrapper[4958]: I1006 13:22:43.467284 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" podStartSLOduration=2.256199762 podStartE2EDuration="14.467262692s" podCreationTimestamp="2025-10-06 13:22:29 +0000 UTC" firstStartedPulling="2025-10-06 13:22:30.124065822 +0000 UTC m=+5704.010091130" lastFinishedPulling="2025-10-06 13:22:42.335128752 +0000 UTC m=+5716.221154060" observedRunningTime="2025-10-06 13:22:43.45667847 +0000 UTC m=+5717.342703778" watchObservedRunningTime="2025-10-06 13:22:43.467262692 +0000 UTC m=+5717.353288000" Oct 06 13:22:53 crc kubenswrapper[4958]: I1006 13:22:53.802381 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:22:53 crc kubenswrapper[4958]: I1006 13:22:53.802919 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:23:23 crc kubenswrapper[4958]: I1006 13:23:23.802390 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:23:23 crc kubenswrapper[4958]: I1006 13:23:23.802966 4958 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:23:35 crc kubenswrapper[4958]: I1006 13:23:35.134105 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5784c7f6c4-pqpwp_d5b91f63-f0c4-4c4b-a06a-0136898c0beb/barbican-api/0.log" Oct 06 13:23:35 crc kubenswrapper[4958]: I1006 13:23:35.223515 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5784c7f6c4-pqpwp_d5b91f63-f0c4-4c4b-a06a-0136898c0beb/barbican-api-log/0.log" Oct 06 13:23:35 crc kubenswrapper[4958]: I1006 13:23:35.540293 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f4f85d9b4-tcldf_f3a9469b-86a8-4eec-9722-8bec4159b05e/barbican-keystone-listener/0.log" Oct 06 13:23:35 crc kubenswrapper[4958]: I1006 13:23:35.642672 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f4f85d9b4-tcldf_f3a9469b-86a8-4eec-9722-8bec4159b05e/barbican-keystone-listener-log/0.log" Oct 06 13:23:35 crc kubenswrapper[4958]: I1006 13:23:35.784037 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56588d4b7-rsgzm_d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f/barbican-worker/0.log" Oct 06 13:23:35 crc kubenswrapper[4958]: I1006 13:23:35.883479 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56588d4b7-rsgzm_d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f/barbican-worker-log/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.027551 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq_bff355e0-d99f-4997-81e9-849deb8cea2a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 
06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.230949 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/ceilometer-notification-agent/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.252034 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/ceilometer-central-agent/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.350304 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/proxy-httpd/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.406236 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/sg-core/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.594700 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5631f7c8-d7b1-4655-8acd-83a29bb5f3b3/cinder-api-log/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.650196 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5631f7c8-d7b1-4655-8acd-83a29bb5f3b3/cinder-api/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.853895 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_131b14e5-e45a-4fc4-817c-b8f82c27e92e/cinder-scheduler/0.log" Oct 06 13:23:36 crc kubenswrapper[4958]: I1006 13:23:36.867040 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_131b14e5-e45a-4fc4-817c-b8f82c27e92e/probe/0.log" Oct 06 13:23:37 crc kubenswrapper[4958]: I1006 13:23:37.086605 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7whq7_55d6c75b-9ef4-4576-bdc9-46bd62865410/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:37 crc 
kubenswrapper[4958]: I1006 13:23:37.179758 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7_9966ebae-f14d-4b3a-aea7-28843e2fe605/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:37 crc kubenswrapper[4958]: I1006 13:23:37.334090 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz_b84c284f-00cf-4afd-a3e6-84c24af1caae/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:37 crc kubenswrapper[4958]: I1006 13:23:37.518132 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5lvxx_760d1ffb-81bb-4765-865c-c655d0886553/init/0.log" Oct 06 13:23:37 crc kubenswrapper[4958]: I1006 13:23:37.655763 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5lvxx_760d1ffb-81bb-4765-865c-c655d0886553/init/0.log" Oct 06 13:23:37 crc kubenswrapper[4958]: I1006 13:23:37.896082 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5lvxx_760d1ffb-81bb-4765-865c-c655d0886553/dnsmasq-dns/0.log" Oct 06 13:23:37 crc kubenswrapper[4958]: I1006 13:23:37.931299 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd_81afece6-fe0a-491c-94b8-3b19d00058c5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.077849 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4/glance-httpd/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.154047 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4/glance-log/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.287703 
4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ddd9cc34-6d2f-41d2-ba9f-e41230964003/glance-httpd/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.354464 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ddd9cc34-6d2f-41d2-ba9f-e41230964003/glance-log/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.492660 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69f5d58bb-ghq4l_7fdb6376-1709-4378-8fe4-eaf26cf5fde7/horizon/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.770179 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw_f1064552-8f6a-46ac-8628-d9d2bc8c2a95/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:38 crc kubenswrapper[4958]: I1006 13:23:38.841880 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-m7n8m_27373b57-9835-4096-9b31-eab53444391c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:39 crc kubenswrapper[4958]: I1006 13:23:39.082033 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69f5d58bb-ghq4l_7fdb6376-1709-4378-8fe4-eaf26cf5fde7/horizon-log/0.log" Oct 06 13:23:39 crc kubenswrapper[4958]: I1006 13:23:39.260106 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29329261-tpdvf_80b45d5d-1a89-4c03-a387-d74a9e2912f4/keystone-cron/0.log" Oct 06 13:23:39 crc kubenswrapper[4958]: I1006 13:23:39.352684 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_670f79b0-7850-4798-a452-f387018cd4d3/kube-state-metrics/0.log" Oct 06 13:23:39 crc kubenswrapper[4958]: I1006 13:23:39.604178 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk_186200b0-8ce3-46a8-9691-42b254a077be/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:39 crc kubenswrapper[4958]: I1006 13:23:39.604502 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6d6b6556f7-c2dwg_2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756/keystone-api/0.log" Oct 06 13:23:40 crc kubenswrapper[4958]: I1006 13:23:40.194120 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bc84f8f6c-tdr2k_6ef174b4-138f-4dc1-8618-afb9c9e8f9b3/neutron-httpd/0.log" Oct 06 13:23:40 crc kubenswrapper[4958]: I1006 13:23:40.289036 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t_c7524451-dd6e-42b7-8454-4e9efe77c79c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:40 crc kubenswrapper[4958]: I1006 13:23:40.442973 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bc84f8f6c-tdr2k_6ef174b4-138f-4dc1-8618-afb9c9e8f9b3/neutron-api/0.log" Oct 06 13:23:41 crc kubenswrapper[4958]: I1006 13:23:41.324793 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ded8adc6-35b0-4901-89ec-7f314c7817e7/nova-cell0-conductor-conductor/0.log" Oct 06 13:23:41 crc kubenswrapper[4958]: I1006 13:23:41.904026 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f769ee5d-6085-4e88-a212-2c3e2e8f6f2b/nova-cell1-conductor-conductor/0.log" Oct 06 13:23:42 crc kubenswrapper[4958]: I1006 13:23:42.246960 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_18ae50d9-6e14-4379-b6e2-6a1845859f0c/nova-api-log/0.log" Oct 06 13:23:42 crc kubenswrapper[4958]: I1006 13:23:42.489117 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_07232aba-c139-41f7-b153-ab542bbfa39a/nova-cell1-novncproxy-novncproxy/0.log" Oct 06 13:23:42 crc kubenswrapper[4958]: I1006 13:23:42.773561 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nx8z6_226865fc-14de-4b5f-a693-a27ef3d06efa/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:42 crc kubenswrapper[4958]: I1006 13:23:42.798859 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_18ae50d9-6e14-4379-b6e2-6a1845859f0c/nova-api-api/0.log" Oct 06 13:23:43 crc kubenswrapper[4958]: I1006 13:23:43.472727 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_22f238b7-8e7b-408c-81cd-9635a10e7d3d/nova-metadata-log/0.log" Oct 06 13:23:43 crc kubenswrapper[4958]: I1006 13:23:43.593582 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b61e54d-6905-4ce9-b034-987af62ab20a/nova-scheduler-scheduler/0.log" Oct 06 13:23:43 crc kubenswrapper[4958]: I1006 13:23:43.859681 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51/mysql-bootstrap/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.024116 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51/mysql-bootstrap/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.059481 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51/galera/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.309462 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acaf745d-7462-44e9-be0b-28424e3c2f31/mysql-bootstrap/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.476528 4958 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acaf745d-7462-44e9-be0b-28424e3c2f31/mysql-bootstrap/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.542565 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acaf745d-7462-44e9-be0b-28424e3c2f31/galera/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.738016 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2474d50f-478f-4d0f-abc0-f0a5135285ca/openstackclient/0.log" Oct 06 13:23:44 crc kubenswrapper[4958]: I1006 13:23:44.975269 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5x288_ca1fdee5-1c5e-4740-b69a-d2111ba255ee/ovn-controller/0.log" Oct 06 13:23:45 crc kubenswrapper[4958]: I1006 13:23:45.179767 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dk97g_0001ceb8-4afd-4d37-acfe-8ed9c976b6d9/openstack-network-exporter/0.log" Oct 06 13:23:45 crc kubenswrapper[4958]: I1006 13:23:45.423420 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovsdb-server-init/0.log" Oct 06 13:23:45 crc kubenswrapper[4958]: I1006 13:23:45.636918 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovsdb-server-init/0.log" Oct 06 13:23:45 crc kubenswrapper[4958]: I1006 13:23:45.645634 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovs-vswitchd/0.log" Oct 06 13:23:45 crc kubenswrapper[4958]: I1006 13:23:45.839511 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovsdb-server/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.098044 4958 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5ljwg_08d67da7-f4f3-4e1c-acd8-c8fcec30f59d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.321622 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0bee2760-9ae4-4988-80cc-1bf507ae032b/openstack-network-exporter/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.340124 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0bee2760-9ae4-4988-80cc-1bf507ae032b/ovn-northd/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.509384 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d699699e-9c26-4129-9483-3ac7d597f948/openstack-network-exporter/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.593828 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_22f238b7-8e7b-408c-81cd-9635a10e7d3d/nova-metadata-metadata/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.688450 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d699699e-9c26-4129-9483-3ac7d597f948/ovsdbserver-nb/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.819687 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90b1ca14-6697-42b0-8e63-fcec51e4599a/openstack-network-exporter/0.log" Oct 06 13:23:46 crc kubenswrapper[4958]: I1006 13:23:46.910896 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90b1ca14-6697-42b0-8e63-fcec51e4599a/ovsdbserver-sb/0.log" Oct 06 13:23:47 crc kubenswrapper[4958]: I1006 13:23:47.338232 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-999fc56db-gbkpz_13b3f1be-75c2-49c7-a3c6-d6dd842788c4/placement-api/0.log" Oct 06 13:23:47 crc kubenswrapper[4958]: I1006 13:23:47.455565 4958 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-999fc56db-gbkpz_13b3f1be-75c2-49c7-a3c6-d6dd842788c4/placement-log/0.log" Oct 06 13:23:47 crc kubenswrapper[4958]: I1006 13:23:47.587732 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35b611fd-63b3-4146-b713-3fef7c26c3c7/setup-container/0.log" Oct 06 13:23:47 crc kubenswrapper[4958]: I1006 13:23:47.856804 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35b611fd-63b3-4146-b713-3fef7c26c3c7/setup-container/0.log" Oct 06 13:23:47 crc kubenswrapper[4958]: I1006 13:23:47.875474 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35b611fd-63b3-4146-b713-3fef7c26c3c7/rabbitmq/0.log" Oct 06 13:23:48 crc kubenswrapper[4958]: I1006 13:23:48.058708 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02fc87b1-4709-4476-a597-9154c5c3a322/setup-container/0.log" Oct 06 13:23:48 crc kubenswrapper[4958]: I1006 13:23:48.289373 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02fc87b1-4709-4476-a597-9154c5c3a322/rabbitmq/0.log" Oct 06 13:23:48 crc kubenswrapper[4958]: I1006 13:23:48.316018 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02fc87b1-4709-4476-a597-9154c5c3a322/setup-container/0.log" Oct 06 13:23:48 crc kubenswrapper[4958]: I1006 13:23:48.473329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht_eee45a20-bff0-4c1c-a7a7-84646b71c82d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:48 crc kubenswrapper[4958]: I1006 13:23:48.542822 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-m996d_3dd583f8-4c3f-4059-8b0f-621021a4eaa1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:48 crc 
kubenswrapper[4958]: I1006 13:23:48.781360 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4_99935553-e8d4-497e-be84-8fa4a807fd72/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:48 crc kubenswrapper[4958]: I1006 13:23:48.957010 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jd665_7a0b8144-e1d6-4d95-8f13-71ea09785481/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.026734 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-r248v_263da9ff-5240-442f-9a4b-e2d8b5a30321/ssh-known-hosts-edpm-deployment/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.327288 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cc4dd9879-7xgdr_eb6c6362-e91c-47c8-8616-702c4cada19a/proxy-server/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.506347 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cc4dd9879-7xgdr_eb6c6362-e91c-47c8-8616-702c4cada19a/proxy-httpd/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.546110 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mvxsm_368db9f7-1e1b-42e1-a8d7-af0c7d9d910f/swift-ring-rebalance/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.680361 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-auditor/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.794342 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-reaper/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.925743 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-replicator/0.log" Oct 06 13:23:49 crc kubenswrapper[4958]: I1006 13:23:49.988815 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-auditor/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.002101 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-server/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.211919 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-replicator/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.254522 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-server/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.255159 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-updater/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.421254 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-auditor/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.453907 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-expirer/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.511630 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-replicator/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.626267 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-server/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.708055 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/rsync/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.711155 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-updater/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.845814 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/swift-recon-cron/0.log" Oct 06 13:23:50 crc kubenswrapper[4958]: I1006 13:23:50.987947 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-47hl4_5569cfb8-1cd6-4f3d-9eee-282ddce72171/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:51 crc kubenswrapper[4958]: I1006 13:23:51.149995 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5_264707ac-53c7-4002-bb44-5ed2af779aec/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:23:53 crc kubenswrapper[4958]: I1006 13:23:53.801227 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:23:53 crc kubenswrapper[4958]: I1006 13:23:53.801497 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:23:53 crc kubenswrapper[4958]: I1006 13:23:53.801540 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 13:23:53 crc kubenswrapper[4958]: I1006 13:23:53.802246 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d41da7355ed8fca93307deb9ba65e8c10fc79ad77b6a1123fc848e312fa086a"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:23:53 crc kubenswrapper[4958]: I1006 13:23:53.802289 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://1d41da7355ed8fca93307deb9ba65e8c10fc79ad77b6a1123fc848e312fa086a" gracePeriod=600 Oct 06 13:23:54 crc kubenswrapper[4958]: I1006 13:23:54.151307 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="1d41da7355ed8fca93307deb9ba65e8c10fc79ad77b6a1123fc848e312fa086a" exitCode=0 Oct 06 13:23:54 crc kubenswrapper[4958]: I1006 13:23:54.151546 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"1d41da7355ed8fca93307deb9ba65e8c10fc79ad77b6a1123fc848e312fa086a"} Oct 06 13:23:54 crc kubenswrapper[4958]: I1006 13:23:54.151573 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" 
event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223"} Oct 06 13:23:54 crc kubenswrapper[4958]: I1006 13:23:54.151588 4958 scope.go:117] "RemoveContainer" containerID="d84ab81ce9062a0950d4884324ac3057bab7e52ccaab6ef82edbbad8288b7777" Oct 06 13:23:55 crc kubenswrapper[4958]: I1006 13:23:55.134876 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3a4b3c4e-da8b-4eb1-a159-6376181dcbb8/memcached/0.log" Oct 06 13:24:40 crc kubenswrapper[4958]: I1006 13:24:40.582622 4958 generic.go:334] "Generic (PLEG): container finished" podID="9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" containerID="03b40be786995d889587dee435a0e1efdba6c2954233a0870e1c9d9659c3436d" exitCode=0 Oct 06 13:24:40 crc kubenswrapper[4958]: I1006 13:24:40.582687 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" event={"ID":"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a","Type":"ContainerDied","Data":"03b40be786995d889587dee435a0e1efdba6c2954233a0870e1c9d9659c3436d"} Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.702108 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.740679 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-q8qxb"] Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.749882 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-q8qxb"] Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.797811 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzg67\" (UniqueName: \"kubernetes.io/projected/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-kube-api-access-xzg67\") pod \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.798286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-host\") pod \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\" (UID: \"9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a\") " Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.798349 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-host" (OuterVolumeSpecName: "host") pod "9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" (UID: "9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.798758 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.805214 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-kube-api-access-xzg67" (OuterVolumeSpecName: "kube-api-access-xzg67") pod "9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" (UID: "9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a"). InnerVolumeSpecName "kube-api-access-xzg67". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:24:41 crc kubenswrapper[4958]: I1006 13:24:41.900428 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzg67\" (UniqueName: \"kubernetes.io/projected/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a-kube-api-access-xzg67\") on node \"crc\" DevicePath \"\"" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.608619 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d71c924ad0de6903ab4b1bf39f0a6468d416b9190b6c48d5c851d5c01f93140f" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.608707 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-q8qxb" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.933275 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" path="/var/lib/kubelet/pods/9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a/volumes" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.934549 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-fdgb6"] Oct 06 13:24:42 crc kubenswrapper[4958]: E1006 13:24:42.935213 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" containerName="container-00" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.935253 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" containerName="container-00" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.935909 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce140a0-f3b8-4c71-8d9d-9feee2b1ff6a" containerName="container-00" Oct 06 13:24:42 crc kubenswrapper[4958]: I1006 13:24:42.937445 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.025050 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57bpq\" (UniqueName: \"kubernetes.io/projected/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-kube-api-access-57bpq\") pod \"crc-debug-fdgb6\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.025280 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-host\") pod \"crc-debug-fdgb6\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.126979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-host\") pod \"crc-debug-fdgb6\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.127091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57bpq\" (UniqueName: \"kubernetes.io/projected/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-kube-api-access-57bpq\") pod \"crc-debug-fdgb6\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.127096 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-host\") pod \"crc-debug-fdgb6\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc 
kubenswrapper[4958]: I1006 13:24:43.145242 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57bpq\" (UniqueName: \"kubernetes.io/projected/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-kube-api-access-57bpq\") pod \"crc-debug-fdgb6\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.257625 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:43 crc kubenswrapper[4958]: W1006 13:24:43.307693 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1ad0a30_8b7d_488e_a0b2_8e582477e79b.slice/crio-41d131f9022008b7690bf700718bc52d449bd13fc79730651b452acf71f5922f WatchSource:0}: Error finding container 41d131f9022008b7690bf700718bc52d449bd13fc79730651b452acf71f5922f: Status 404 returned error can't find the container with id 41d131f9022008b7690bf700718bc52d449bd13fc79730651b452acf71f5922f Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.619661 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" event={"ID":"b1ad0a30-8b7d-488e-a0b2-8e582477e79b","Type":"ContainerStarted","Data":"dfab613495c8c0f535addb29d08ff6814e5174b2362e727f452431f62a7579bb"} Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.619999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" event={"ID":"b1ad0a30-8b7d-488e-a0b2-8e582477e79b","Type":"ContainerStarted","Data":"41d131f9022008b7690bf700718bc52d449bd13fc79730651b452acf71f5922f"} Oct 06 13:24:43 crc kubenswrapper[4958]: I1006 13:24:43.640592 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" podStartSLOduration=1.640571359 podStartE2EDuration="1.640571359s" 
podCreationTimestamp="2025-10-06 13:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:24:43.63824073 +0000 UTC m=+5837.524266048" watchObservedRunningTime="2025-10-06 13:24:43.640571359 +0000 UTC m=+5837.526596667" Oct 06 13:24:44 crc kubenswrapper[4958]: I1006 13:24:44.628645 4958 generic.go:334] "Generic (PLEG): container finished" podID="b1ad0a30-8b7d-488e-a0b2-8e582477e79b" containerID="dfab613495c8c0f535addb29d08ff6814e5174b2362e727f452431f62a7579bb" exitCode=0 Oct 06 13:24:44 crc kubenswrapper[4958]: I1006 13:24:44.628689 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" event={"ID":"b1ad0a30-8b7d-488e-a0b2-8e582477e79b","Type":"ContainerDied","Data":"dfab613495c8c0f535addb29d08ff6814e5174b2362e727f452431f62a7579bb"} Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.731963 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.867470 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57bpq\" (UniqueName: \"kubernetes.io/projected/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-kube-api-access-57bpq\") pod \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.867617 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-host\") pod \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\" (UID: \"b1ad0a30-8b7d-488e-a0b2-8e582477e79b\") " Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.867753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-host" (OuterVolumeSpecName: "host") pod "b1ad0a30-8b7d-488e-a0b2-8e582477e79b" (UID: "b1ad0a30-8b7d-488e-a0b2-8e582477e79b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.868086 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.873978 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-kube-api-access-57bpq" (OuterVolumeSpecName: "kube-api-access-57bpq") pod "b1ad0a30-8b7d-488e-a0b2-8e582477e79b" (UID: "b1ad0a30-8b7d-488e-a0b2-8e582477e79b"). InnerVolumeSpecName "kube-api-access-57bpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:24:45 crc kubenswrapper[4958]: I1006 13:24:45.973183 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57bpq\" (UniqueName: \"kubernetes.io/projected/b1ad0a30-8b7d-488e-a0b2-8e582477e79b-kube-api-access-57bpq\") on node \"crc\" DevicePath \"\"" Oct 06 13:24:46 crc kubenswrapper[4958]: I1006 13:24:46.643948 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" event={"ID":"b1ad0a30-8b7d-488e-a0b2-8e582477e79b","Type":"ContainerDied","Data":"41d131f9022008b7690bf700718bc52d449bd13fc79730651b452acf71f5922f"} Oct 06 13:24:46 crc kubenswrapper[4958]: I1006 13:24:46.643997 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41d131f9022008b7690bf700718bc52d449bd13fc79730651b452acf71f5922f" Oct 06 13:24:46 crc kubenswrapper[4958]: I1006 13:24:46.644392 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-fdgb6" Oct 06 13:24:52 crc kubenswrapper[4958]: I1006 13:24:52.722931 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-fdgb6"] Oct 06 13:24:52 crc kubenswrapper[4958]: I1006 13:24:52.731693 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-fdgb6"] Oct 06 13:24:52 crc kubenswrapper[4958]: I1006 13:24:52.930263 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ad0a30-8b7d-488e-a0b2-8e582477e79b" path="/var/lib/kubelet/pods/b1ad0a30-8b7d-488e-a0b2-8e582477e79b/volumes" Oct 06 13:24:53 crc kubenswrapper[4958]: I1006 13:24:53.904111 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-f7ndj"] Oct 06 13:24:53 crc kubenswrapper[4958]: E1006 13:24:53.905327 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ad0a30-8b7d-488e-a0b2-8e582477e79b" 
containerName="container-00" Oct 06 13:24:53 crc kubenswrapper[4958]: I1006 13:24:53.905443 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ad0a30-8b7d-488e-a0b2-8e582477e79b" containerName="container-00" Oct 06 13:24:53 crc kubenswrapper[4958]: I1006 13:24:53.905748 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ad0a30-8b7d-488e-a0b2-8e582477e79b" containerName="container-00" Oct 06 13:24:53 crc kubenswrapper[4958]: I1006 13:24:53.906606 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:53 crc kubenswrapper[4958]: I1006 13:24:53.993045 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxkg\" (UniqueName: \"kubernetes.io/projected/35ea3dad-29cf-451a-b17a-123ad66b7918-kube-api-access-hcxkg\") pod \"crc-debug-f7ndj\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:53 crc kubenswrapper[4958]: I1006 13:24:53.993095 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ea3dad-29cf-451a-b17a-123ad66b7918-host\") pod \"crc-debug-f7ndj\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.094785 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxkg\" (UniqueName: \"kubernetes.io/projected/35ea3dad-29cf-451a-b17a-123ad66b7918-kube-api-access-hcxkg\") pod \"crc-debug-f7ndj\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.094836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/35ea3dad-29cf-451a-b17a-123ad66b7918-host\") pod \"crc-debug-f7ndj\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.094968 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ea3dad-29cf-451a-b17a-123ad66b7918-host\") pod \"crc-debug-f7ndj\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.121031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxkg\" (UniqueName: \"kubernetes.io/projected/35ea3dad-29cf-451a-b17a-123ad66b7918-kube-api-access-hcxkg\") pod \"crc-debug-f7ndj\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.232604 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.720897 4958 generic.go:334] "Generic (PLEG): container finished" podID="35ea3dad-29cf-451a-b17a-123ad66b7918" containerID="d70a92963451e92215ae4a93ab82089a9810f9571e51871af1a48f1b4313e793" exitCode=0 Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.720951 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" event={"ID":"35ea3dad-29cf-451a-b17a-123ad66b7918","Type":"ContainerDied","Data":"d70a92963451e92215ae4a93ab82089a9810f9571e51871af1a48f1b4313e793"} Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.721008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" event={"ID":"35ea3dad-29cf-451a-b17a-123ad66b7918","Type":"ContainerStarted","Data":"d734604dd00ae207962ba142418eb64c279b3dbc826418f289f04ada0b194e74"} Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.771927 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-f7ndj"] Oct 06 13:24:54 crc kubenswrapper[4958]: I1006 13:24:54.787094 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d9jtr/crc-debug-f7ndj"] Oct 06 13:24:55 crc kubenswrapper[4958]: I1006 13:24:55.831927 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:55 crc kubenswrapper[4958]: I1006 13:24:55.927094 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcxkg\" (UniqueName: \"kubernetes.io/projected/35ea3dad-29cf-451a-b17a-123ad66b7918-kube-api-access-hcxkg\") pod \"35ea3dad-29cf-451a-b17a-123ad66b7918\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " Oct 06 13:24:55 crc kubenswrapper[4958]: I1006 13:24:55.927307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ea3dad-29cf-451a-b17a-123ad66b7918-host\") pod \"35ea3dad-29cf-451a-b17a-123ad66b7918\" (UID: \"35ea3dad-29cf-451a-b17a-123ad66b7918\") " Oct 06 13:24:55 crc kubenswrapper[4958]: I1006 13:24:55.927368 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35ea3dad-29cf-451a-b17a-123ad66b7918-host" (OuterVolumeSpecName: "host") pod "35ea3dad-29cf-451a-b17a-123ad66b7918" (UID: "35ea3dad-29cf-451a-b17a-123ad66b7918"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:24:55 crc kubenswrapper[4958]: I1006 13:24:55.927886 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ea3dad-29cf-451a-b17a-123ad66b7918-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:24:55 crc kubenswrapper[4958]: I1006 13:24:55.935975 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ea3dad-29cf-451a-b17a-123ad66b7918-kube-api-access-hcxkg" (OuterVolumeSpecName: "kube-api-access-hcxkg") pod "35ea3dad-29cf-451a-b17a-123ad66b7918" (UID: "35ea3dad-29cf-451a-b17a-123ad66b7918"). InnerVolumeSpecName "kube-api-access-hcxkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.030222 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcxkg\" (UniqueName: \"kubernetes.io/projected/35ea3dad-29cf-451a-b17a-123ad66b7918-kube-api-access-hcxkg\") on node \"crc\" DevicePath \"\"" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.338032 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-l8pp6_cbe81ce3-66c4-4226-bc0a-78d6757561ff/kube-rbac-proxy/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.392335 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-l8pp6_cbe81ce3-66c4-4226-bc0a-78d6757561ff/manager/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.527446 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-th999_b7998026-248b-4fdb-b5fe-8e6ca29c69f0/kube-rbac-proxy/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.597499 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-th999_b7998026-248b-4fdb-b5fe-8e6ca29c69f0/manager/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.718836 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-4qklz_efb3e8ee-d92e-49fe-82c8-3fbe5794410f/manager/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.737087 4958 scope.go:117] "RemoveContainer" containerID="d70a92963451e92215ae4a93ab82089a9810f9571e51871af1a48f1b4313e793" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.737170 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/crc-debug-f7ndj" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.748872 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-4qklz_efb3e8ee-d92e-49fe-82c8-3fbe5794410f/kube-rbac-proxy/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.815077 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/util/0.log" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.943586 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ea3dad-29cf-451a-b17a-123ad66b7918" path="/var/lib/kubelet/pods/35ea3dad-29cf-451a-b17a-123ad66b7918/volumes" Oct 06 13:24:56 crc kubenswrapper[4958]: I1006 13:24:56.977444 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/util/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.013539 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/pull/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.020422 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/pull/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.175716 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/util/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.184349 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/pull/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.184876 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/extract/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.351996 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-jj784_da95cbd7-81b9-48e7-99eb-207063cf651a/kube-rbac-proxy/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.395697 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-jj784_da95cbd7-81b9-48e7-99eb-207063cf651a/manager/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.448760 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-xc89l_96446a00-b397-4b48-94bf-432c32ed13cb/kube-rbac-proxy/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.529706 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-xc89l_96446a00-b397-4b48-94bf-432c32ed13cb/manager/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.594931 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-7nfvh_7d2f0a48-cffe-49d6-8ac8-830558228e2a/kube-rbac-proxy/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.645551 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-7nfvh_7d2f0a48-cffe-49d6-8ac8-830558228e2a/manager/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 
13:24:57.756818 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-dnk6g_8a0f56a2-c168-4707-acac-43cc91b44835/kube-rbac-proxy/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.912852 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-dnk6g_8a0f56a2-c168-4707-acac-43cc91b44835/manager/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.946743 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-6h8b6_78e7c04a-fb1a-420f-a99b-94b6b0cf899a/kube-rbac-proxy/0.log" Oct 06 13:24:57 crc kubenswrapper[4958]: I1006 13:24:57.998267 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-6h8b6_78e7c04a-fb1a-420f-a99b-94b6b0cf899a/manager/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.089547 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-86s48_e1d4f271-a424-45d7-abf0-33633ac7713c/kube-rbac-proxy/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.195199 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-86s48_e1d4f271-a424-45d7-abf0-33633ac7713c/manager/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.213472 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-gqj7w_cb612d52-fceb-471f-af53-104bfc2966e7/kube-rbac-proxy/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.263169 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-gqj7w_cb612d52-fceb-471f-af53-104bfc2966e7/manager/0.log" Oct 
06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.406300 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-42wnq_78e5cfa4-7e8c-4fb2-b90d-bd9967385a71/kube-rbac-proxy/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.410864 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-42wnq_78e5cfa4-7e8c-4fb2-b90d-bd9967385a71/manager/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.536526 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-7r9bg_3332b65c-b3bf-44f5-ae31-865d77029641/kube-rbac-proxy/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.628033 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-nvbjx_c35f7c69-655c-4d86-bcfd-29a899cf3011/kube-rbac-proxy/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.659266 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-7r9bg_3332b65c-b3bf-44f5-ae31-865d77029641/manager/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.790209 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-nvbjx_c35f7c69-655c-4d86-bcfd-29a899cf3011/manager/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.823619 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-crg8g_28816408-a493-4e27-8213-998d338cc1d0/kube-rbac-proxy/0.log" Oct 06 13:24:58 crc kubenswrapper[4958]: I1006 13:24:58.838939 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-crg8g_28816408-a493-4e27-8213-998d338cc1d0/manager/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.013608 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv_6558de0f-a2a0-4841-9764-574061835f3b/kube-rbac-proxy/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.014753 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv_6558de0f-a2a0-4841-9764-574061835f3b/manager/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.186092 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64c95c565c-djmq7_e3f7c90b-8bb7-4b1c-bab9-0c341627ee32/kube-rbac-proxy/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.247347 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-57448bb547-2ptw6_a2a23b45-5568-49fb-9e85-6bce53831d13/kube-rbac-proxy/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.498095 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wdw62_e61771dc-7e62-4e42-a99a-c8eae920cb26/registry-server/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.514064 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-57448bb547-2ptw6_a2a23b45-5568-49fb-9e85-6bce53831d13/operator/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.739926 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-646d647dd5-fqrj2_28613b85-8223-4190-b2de-a88d186b8901/kube-rbac-proxy/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: 
I1006 13:24:59.749896 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-646d647dd5-fqrj2_28613b85-8223-4190-b2de-a88d186b8901/manager/0.log" Oct 06 13:24:59 crc kubenswrapper[4958]: I1006 13:24:59.914378 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-lf55c_7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97/kube-rbac-proxy/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.005645 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-8c99l_ccce5a46-80c5-4f14-b63d-d4eff64bef36/operator/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.074034 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-lf55c_7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97/manager/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.237245 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-4d8cf_a8547a74-8a2d-4a7f-9852-71036642c51a/kube-rbac-proxy/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.251704 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64c95c565c-djmq7_e3f7c90b-8bb7-4b1c-bab9-0c341627ee32/manager/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.293367 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-4d8cf_a8547a74-8a2d-4a7f-9852-71036642c51a/manager/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.374490 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-zfxpj_c060a91a-1009-469c-a9f4-d2e3b3d34840/kube-rbac-proxy/0.log" 
Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.492958 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-zfxpj_c060a91a-1009-469c-a9f4-d2e3b3d34840/manager/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.514315 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-m5gw5_d79b059c-ab85-4eed-937e-6f7844c24621/kube-rbac-proxy/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.532454 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-m5gw5_d79b059c-ab85-4eed-937e-6f7844c24621/manager/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.625054 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-hfq6r_0bd35c63-af59-499c-baaa-8cec7e13f7bc/kube-rbac-proxy/0.log" Oct 06 13:25:00 crc kubenswrapper[4958]: I1006 13:25:00.671748 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-hfq6r_0bd35c63-af59-499c-baaa-8cec7e13f7bc/manager/0.log" Oct 06 13:25:15 crc kubenswrapper[4958]: I1006 13:25:15.468459 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jxkkt_dc407428-3c19-40aa-b476-c159f9b8f2a4/control-plane-machine-set-operator/0.log" Oct 06 13:25:15 crc kubenswrapper[4958]: I1006 13:25:15.639765 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-x8mdr_d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8/kube-rbac-proxy/0.log" Oct 06 13:25:15 crc kubenswrapper[4958]: I1006 13:25:15.656041 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-x8mdr_d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8/machine-api-operator/0.log" Oct 06 13:25:26 crc kubenswrapper[4958]: I1006 13:25:26.454696 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d6ffb_063d4ef1-4461-4677-90de-7e746456a573/cert-manager-controller/0.log" Oct 06 13:25:26 crc kubenswrapper[4958]: I1006 13:25:26.550110 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-75ksk_78fb0e67-9bf0-4357-9208-9fff92c3074c/cert-manager-cainjector/0.log" Oct 06 13:25:26 crc kubenswrapper[4958]: I1006 13:25:26.657916 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bcxhs_2e49dc60-a2ba-4e79-9563-2dd8857d45b0/cert-manager-webhook/0.log" Oct 06 13:25:37 crc kubenswrapper[4958]: I1006 13:25:37.645455 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-rd5gn_5f482352-713c-4502-aded-dfe37c5fa8bc/nmstate-console-plugin/0.log" Oct 06 13:25:37 crc kubenswrapper[4958]: I1006 13:25:37.797490 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6vlgl_89638e7b-fd22-4b85-8ee9-7eb5353f06c0/nmstate-handler/0.log" Oct 06 13:25:37 crc kubenswrapper[4958]: I1006 13:25:37.844834 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-cmgch_fda4902f-9bcc-419f-80f2-40a46dc2e7dd/kube-rbac-proxy/0.log" Oct 06 13:25:37 crc kubenswrapper[4958]: I1006 13:25:37.856141 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-cmgch_fda4902f-9bcc-419f-80f2-40a46dc2e7dd/nmstate-metrics/0.log" Oct 06 13:25:38 crc kubenswrapper[4958]: I1006 13:25:38.053088 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-cl46g_66c0e594-48e4-4f2f-b25e-5b69f377d6e2/nmstate-operator/0.log" Oct 06 13:25:38 crc kubenswrapper[4958]: I1006 13:25:38.068278 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-82rnp_4605847c-947f-4955-b80b-87bb98b3c946/nmstate-webhook/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.164750 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-tz4r7_38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5/kube-rbac-proxy/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.236183 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-tz4r7_38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5/controller/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.341905 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.512029 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.516330 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.524883 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.562311 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.672212 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.700904 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.707243 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.757015 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.918501 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.930865 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/controller/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.950206 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:25:51 crc kubenswrapper[4958]: I1006 13:25:51.986335 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.091600 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/frr-metrics/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.133813 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/kube-rbac-proxy/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.165325 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/kube-rbac-proxy-frr/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.319717 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/reloader/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.348729 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-vf4w8_f4ed48dc-17a8-4241-9c4e-8febcebb2c45/frr-k8s-webhook-server/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.597984 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7bff9bd6d4-tzpwc_cf4a240a-9885-4f19-aea0-799fe1715bb3/manager/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.762318 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-698665c988-htg28_b9d4c539-c6bf-4300-a2cc-9647dbb9fe53/webhook-server/0.log" Oct 06 13:25:52 crc kubenswrapper[4958]: I1006 13:25:52.869975 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpvkk_7d0d517b-7a87-4cd1-9039-998c3765332f/kube-rbac-proxy/0.log" Oct 06 13:25:53 crc kubenswrapper[4958]: I1006 13:25:53.407279 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpvkk_7d0d517b-7a87-4cd1-9039-998c3765332f/speaker/0.log" Oct 06 13:25:53 crc kubenswrapper[4958]: I1006 13:25:53.721126 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/frr/0.log" Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.458890 4958 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/util/0.log"
Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.668301 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/pull/0.log"
Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.670780 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/util/0.log"
Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.682684 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/pull/0.log"
Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.870210 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/util/0.log"
Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.878304 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/pull/0.log"
Oct 06 13:26:04 crc kubenswrapper[4958]: I1006 13:26:04.888463 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/extract/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.029443 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-utilities/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.174026 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-content/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.200820 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-utilities/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.217675 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-content/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.456000 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-content/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.533727 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-utilities/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.697344 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-utilities/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.841070 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-utilities/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.890212 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-content/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.930121 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-content/0.log"
Oct 06 13:26:05 crc kubenswrapper[4958]: I1006 13:26:05.985441 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/registry-server/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.176689 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-utilities/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.191487 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-content/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.434775 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/util/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.654470 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/util/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.682560 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/pull/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.690908 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/pull/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.878792 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/extract/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.904472 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/util/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.923559 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/pull/0.log"
Oct 06 13:26:06 crc kubenswrapper[4958]: I1006 13:26:06.934323 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/registry-server/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.087243 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xg6kb_e4ac258a-8e41-4889-9395-9f0a614425cb/marketplace-operator/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.129849 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-utilities/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.239272 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-utilities/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.272807 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-content/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.293002 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-content/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.461426 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-utilities/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.517123 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-content/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.644837 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/registry-server/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.722598 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-utilities/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.893674 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-utilities/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.893716 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-content/0.log"
Oct 06 13:26:07 crc kubenswrapper[4958]: I1006 13:26:07.912115 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-content/0.log"
Oct 06 13:26:08 crc kubenswrapper[4958]: I1006 13:26:08.112440 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-utilities/0.log"
Oct 06 13:26:08 crc kubenswrapper[4958]: I1006 13:26:08.128573 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-content/0.log"
Oct 06 13:26:08 crc kubenswrapper[4958]: I1006 13:26:08.691984 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/registry-server/0.log"
Oct 06 13:26:23 crc kubenswrapper[4958]: I1006 13:26:23.802140 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:26:23 crc kubenswrapper[4958]: I1006 13:26:23.802808 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:26:53 crc kubenswrapper[4958]: I1006 13:26:53.802065 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:26:53 crc kubenswrapper[4958]: I1006 13:26:53.802726 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.389260 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq5j"]
Oct 06 13:27:05 crc kubenswrapper[4958]: E1006 13:27:05.393251 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ea3dad-29cf-451a-b17a-123ad66b7918" containerName="container-00"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.393396 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ea3dad-29cf-451a-b17a-123ad66b7918" containerName="container-00"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.393795 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ea3dad-29cf-451a-b17a-123ad66b7918" containerName="container-00"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.395836 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.400529 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq5j"]
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.505685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-catalog-content\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.505745 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-utilities\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.506502 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrrj\" (UniqueName: \"kubernetes.io/projected/20475bad-cedd-4170-9f1b-d813166a94a1-kube-api-access-rtrrj\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.608576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-catalog-content\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.608634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-utilities\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.608767 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtrrj\" (UniqueName: \"kubernetes.io/projected/20475bad-cedd-4170-9f1b-d813166a94a1-kube-api-access-rtrrj\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.609250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-catalog-content\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.609322 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-utilities\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.630247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtrrj\" (UniqueName: \"kubernetes.io/projected/20475bad-cedd-4170-9f1b-d813166a94a1-kube-api-access-rtrrj\") pod \"redhat-marketplace-fhq5j\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") " pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:05 crc kubenswrapper[4958]: I1006 13:27:05.726233 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:06 crc kubenswrapper[4958]: I1006 13:27:06.248589 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq5j"]
Oct 06 13:27:06 crc kubenswrapper[4958]: I1006 13:27:06.958814 4958 generic.go:334] "Generic (PLEG): container finished" podID="20475bad-cedd-4170-9f1b-d813166a94a1" containerID="613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8" exitCode=0
Oct 06 13:27:06 crc kubenswrapper[4958]: I1006 13:27:06.958899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq5j" event={"ID":"20475bad-cedd-4170-9f1b-d813166a94a1","Type":"ContainerDied","Data":"613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8"}
Oct 06 13:27:06 crc kubenswrapper[4958]: I1006 13:27:06.959180 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq5j" event={"ID":"20475bad-cedd-4170-9f1b-d813166a94a1","Type":"ContainerStarted","Data":"aa9cc5391982c205b7e52744eb15298c749cdc30708b4963aaaeaf36a0c68b57"}
Oct 06 13:27:07 crc kubenswrapper[4958]: I1006 13:27:07.969874 4958 generic.go:334] "Generic (PLEG): container finished" podID="20475bad-cedd-4170-9f1b-d813166a94a1" containerID="a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959" exitCode=0
Oct 06 13:27:07 crc kubenswrapper[4958]: I1006 13:27:07.970111 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq5j" event={"ID":"20475bad-cedd-4170-9f1b-d813166a94a1","Type":"ContainerDied","Data":"a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959"}
Oct 06 13:27:08 crc kubenswrapper[4958]: I1006 13:27:08.981701 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq5j" event={"ID":"20475bad-cedd-4170-9f1b-d813166a94a1","Type":"ContainerStarted","Data":"d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b"}
Oct 06 13:27:09 crc kubenswrapper[4958]: I1006 13:27:09.012750 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fhq5j" podStartSLOduration=2.604107866 podStartE2EDuration="4.012717317s" podCreationTimestamp="2025-10-06 13:27:05 +0000 UTC" firstStartedPulling="2025-10-06 13:27:06.963213101 +0000 UTC m=+5980.849238419" lastFinishedPulling="2025-10-06 13:27:08.371822552 +0000 UTC m=+5982.257847870" observedRunningTime="2025-10-06 13:27:08.998675723 +0000 UTC m=+5982.884701051" watchObservedRunningTime="2025-10-06 13:27:09.012717317 +0000 UTC m=+5982.898742665"
Oct 06 13:27:15 crc kubenswrapper[4958]: I1006 13:27:15.727102 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:15 crc kubenswrapper[4958]: I1006 13:27:15.728290 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:15 crc kubenswrapper[4958]: I1006 13:27:15.781924 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:16 crc kubenswrapper[4958]: I1006 13:27:16.098306 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:16 crc kubenswrapper[4958]: I1006 13:27:16.143175 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq5j"]
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.076907 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fhq5j" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="registry-server" containerID="cri-o://d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b" gracePeriod=2
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.574034 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.686949 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-catalog-content\") pod \"20475bad-cedd-4170-9f1b-d813166a94a1\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") "
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.687198 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-utilities\") pod \"20475bad-cedd-4170-9f1b-d813166a94a1\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") "
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.687229 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtrrj\" (UniqueName: \"kubernetes.io/projected/20475bad-cedd-4170-9f1b-d813166a94a1-kube-api-access-rtrrj\") pod \"20475bad-cedd-4170-9f1b-d813166a94a1\" (UID: \"20475bad-cedd-4170-9f1b-d813166a94a1\") "
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.689703 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-utilities" (OuterVolumeSpecName: "utilities") pod "20475bad-cedd-4170-9f1b-d813166a94a1" (UID: "20475bad-cedd-4170-9f1b-d813166a94a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.694703 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20475bad-cedd-4170-9f1b-d813166a94a1-kube-api-access-rtrrj" (OuterVolumeSpecName: "kube-api-access-rtrrj") pod "20475bad-cedd-4170-9f1b-d813166a94a1" (UID: "20475bad-cedd-4170-9f1b-d813166a94a1"). InnerVolumeSpecName "kube-api-access-rtrrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.702369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20475bad-cedd-4170-9f1b-d813166a94a1" (UID: "20475bad-cedd-4170-9f1b-d813166a94a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.790483 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.790783 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20475bad-cedd-4170-9f1b-d813166a94a1-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 13:27:18 crc kubenswrapper[4958]: I1006 13:27:18.790974 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtrrj\" (UniqueName: \"kubernetes.io/projected/20475bad-cedd-4170-9f1b-d813166a94a1-kube-api-access-rtrrj\") on node \"crc\" DevicePath \"\""
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.082731 4958 generic.go:334] "Generic (PLEG): container finished" podID="20475bad-cedd-4170-9f1b-d813166a94a1" containerID="d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b" exitCode=0
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.082790 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq5j" event={"ID":"20475bad-cedd-4170-9f1b-d813166a94a1","Type":"ContainerDied","Data":"d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b"}
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.082828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq5j" event={"ID":"20475bad-cedd-4170-9f1b-d813166a94a1","Type":"ContainerDied","Data":"aa9cc5391982c205b7e52744eb15298c749cdc30708b4963aaaeaf36a0c68b57"}
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.082847 4958 scope.go:117] "RemoveContainer" containerID="d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.084578 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq5j"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.111430 4958 scope.go:117] "RemoveContainer" containerID="a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.118179 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq5j"]
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.129783 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq5j"]
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.149075 4958 scope.go:117] "RemoveContainer" containerID="613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.190006 4958 scope.go:117] "RemoveContainer" containerID="d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b"
Oct 06 13:27:19 crc kubenswrapper[4958]: E1006 13:27:19.190507 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b\": container with ID starting with d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b not found: ID does not exist" containerID="d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.190534 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b"} err="failed to get container status \"d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b\": rpc error: code = NotFound desc = could not find container \"d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b\": container with ID starting with d76c3b6bc38c56ce217ad5a989af41dc95467999e3ec35c97460a2015d28f89b not found: ID does not exist"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.190558 4958 scope.go:117] "RemoveContainer" containerID="a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959"
Oct 06 13:27:19 crc kubenswrapper[4958]: E1006 13:27:19.190809 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959\": container with ID starting with a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959 not found: ID does not exist" containerID="a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.190827 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959"} err="failed to get container status \"a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959\": rpc error: code = NotFound desc = could not find container \"a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959\": container with ID starting with a5fd1b27703cceb639d8b40a0dc8cdccb7e4dcd5a0e39342a1a737df24f86959 not found: ID does not exist"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.190839 4958 scope.go:117] "RemoveContainer" containerID="613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8"
Oct 06 13:27:19 crc kubenswrapper[4958]: E1006 13:27:19.191273 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8\": container with ID starting with 613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8 not found: ID does not exist" containerID="613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8"
Oct 06 13:27:19 crc kubenswrapper[4958]: I1006 13:27:19.191289 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8"} err="failed to get container status \"613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8\": rpc error: code = NotFound desc = could not find container \"613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8\": container with ID starting with 613302dbb8d2dc9e68c024819e1b1c2198db4d184aa6de1bd83d6f193e5e93f8 not found: ID does not exist"
Oct 06 13:27:20 crc kubenswrapper[4958]: I1006 13:27:20.928850 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" path="/var/lib/kubelet/pods/20475bad-cedd-4170-9f1b-d813166a94a1/volumes"
Oct 06 13:27:23 crc kubenswrapper[4958]: I1006 13:27:23.801830 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 13:27:23 crc kubenswrapper[4958]: I1006 13:27:23.802343 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 13:27:23 crc kubenswrapper[4958]: I1006 13:27:23.802390 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z"
Oct 06 13:27:23 crc kubenswrapper[4958]: I1006 13:27:23.802844 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223"} pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 13:27:23 crc kubenswrapper[4958]: I1006 13:27:23.802892 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" gracePeriod=600
Oct 06 13:27:23 crc kubenswrapper[4958]: E1006 13:27:23.936953 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:27:24 crc kubenswrapper[4958]: I1006 13:27:24.137191 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" exitCode=0
Oct 06 13:27:24 crc kubenswrapper[4958]: I1006 13:27:24.137237 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223"}
Oct 06 13:27:24 crc kubenswrapper[4958]: I1006 13:27:24.137268 4958 scope.go:117] "RemoveContainer" containerID="1d41da7355ed8fca93307deb9ba65e8c10fc79ad77b6a1123fc848e312fa086a"
Oct 06 13:27:24 crc kubenswrapper[4958]: I1006 13:27:24.138248 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223"
Oct 06 13:27:24 crc kubenswrapper[4958]: E1006 13:27:24.138748 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.792437 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mcbkk"]
Oct 06 13:27:32 crc kubenswrapper[4958]: E1006 13:27:32.793477 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="registry-server"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.793493 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="registry-server"
Oct 06 13:27:32 crc kubenswrapper[4958]: E1006 13:27:32.793528 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="extract-content"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.793536 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="extract-content"
Oct 06 13:27:32 crc kubenswrapper[4958]: E1006 13:27:32.793548 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="extract-utilities"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.793558 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="extract-utilities"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.793833 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="20475bad-cedd-4170-9f1b-d813166a94a1" containerName="registry-server"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.795955 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.819059 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mcbkk"]
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.917015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-catalog-content\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.917424 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pss\" (UniqueName: \"kubernetes.io/projected/da2a365a-a8ff-4895-9241-62bac48eb1fb-kube-api-access-c5pss\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:32 crc kubenswrapper[4958]: I1006 13:27:32.917723 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-utilities\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.019390 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pss\" (UniqueName: \"kubernetes.io/projected/da2a365a-a8ff-4895-9241-62bac48eb1fb-kube-api-access-c5pss\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.019477 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-utilities\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.019552 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-catalog-content\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.020077 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-catalog-content\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.020230 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-utilities\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.050618 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pss\" (UniqueName: \"kubernetes.io/projected/da2a365a-a8ff-4895-9241-62bac48eb1fb-kube-api-access-c5pss\") pod \"certified-operators-mcbkk\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.113123 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mcbkk"
Oct 06 13:27:33 crc kubenswrapper[4958]: I1006 13:27:33.670998 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mcbkk"]
Oct 06 13:27:34 crc kubenswrapper[4958]: I1006 13:27:34.266032 4958 generic.go:334] "Generic (PLEG): container finished" podID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerID="5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f" exitCode=0
Oct 06 13:27:34 crc kubenswrapper[4958]: I1006 13:27:34.266220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerDied","Data":"5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f"}
Oct 06 13:27:34 crc kubenswrapper[4958]: I1006 13:27:34.266312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerStarted","Data":"fa8dc7075d11c05970f827153fd5c0267f730b6b6517bd90b18bbec175ac1acf"}
Oct 06 13:27:34 crc kubenswrapper[4958]: I1006 13:27:34.267739 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 13:27:36 crc kubenswrapper[4958]: I1006 13:27:36.286992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerStarted","Data":"7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87"}
Oct 06 13:27:37
crc kubenswrapper[4958]: I1006 13:27:37.296225 4958 generic.go:334] "Generic (PLEG): container finished" podID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerID="7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87" exitCode=0 Oct 06 13:27:37 crc kubenswrapper[4958]: I1006 13:27:37.296833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerDied","Data":"7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87"} Oct 06 13:27:37 crc kubenswrapper[4958]: I1006 13:27:37.913755 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:27:37 crc kubenswrapper[4958]: E1006 13:27:37.914224 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:27:38 crc kubenswrapper[4958]: I1006 13:27:38.307314 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerStarted","Data":"a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8"} Oct 06 13:27:38 crc kubenswrapper[4958]: I1006 13:27:38.331110 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mcbkk" podStartSLOduration=2.746954156 podStartE2EDuration="6.331092335s" podCreationTimestamp="2025-10-06 13:27:32 +0000 UTC" firstStartedPulling="2025-10-06 13:27:34.267508711 +0000 UTC m=+6008.153534009" lastFinishedPulling="2025-10-06 13:27:37.85164687 
+0000 UTC m=+6011.737672188" observedRunningTime="2025-10-06 13:27:38.322474878 +0000 UTC m=+6012.208500196" watchObservedRunningTime="2025-10-06 13:27:38.331092335 +0000 UTC m=+6012.217117633" Oct 06 13:27:43 crc kubenswrapper[4958]: I1006 13:27:43.113752 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mcbkk" Oct 06 13:27:43 crc kubenswrapper[4958]: I1006 13:27:43.115389 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mcbkk" Oct 06 13:27:43 crc kubenswrapper[4958]: I1006 13:27:43.159418 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mcbkk" Oct 06 13:27:43 crc kubenswrapper[4958]: I1006 13:27:43.442113 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mcbkk" Oct 06 13:27:43 crc kubenswrapper[4958]: I1006 13:27:43.501082 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mcbkk"] Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.382550 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mcbkk" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="registry-server" containerID="cri-o://a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8" gracePeriod=2 Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.781754 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mcbkk" Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.913618 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5pss\" (UniqueName: \"kubernetes.io/projected/da2a365a-a8ff-4895-9241-62bac48eb1fb-kube-api-access-c5pss\") pod \"da2a365a-a8ff-4895-9241-62bac48eb1fb\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.913757 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-catalog-content\") pod \"da2a365a-a8ff-4895-9241-62bac48eb1fb\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.913808 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-utilities\") pod \"da2a365a-a8ff-4895-9241-62bac48eb1fb\" (UID: \"da2a365a-a8ff-4895-9241-62bac48eb1fb\") " Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.914996 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-utilities" (OuterVolumeSpecName: "utilities") pod "da2a365a-a8ff-4895-9241-62bac48eb1fb" (UID: "da2a365a-a8ff-4895-9241-62bac48eb1fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:27:45 crc kubenswrapper[4958]: I1006 13:27:45.920445 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2a365a-a8ff-4895-9241-62bac48eb1fb-kube-api-access-c5pss" (OuterVolumeSpecName: "kube-api-access-c5pss") pod "da2a365a-a8ff-4895-9241-62bac48eb1fb" (UID: "da2a365a-a8ff-4895-9241-62bac48eb1fb"). InnerVolumeSpecName "kube-api-access-c5pss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.016078 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5pss\" (UniqueName: \"kubernetes.io/projected/da2a365a-a8ff-4895-9241-62bac48eb1fb-kube-api-access-c5pss\") on node \"crc\" DevicePath \"\"" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.016110 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.281956 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2a365a-a8ff-4895-9241-62bac48eb1fb" (UID: "da2a365a-a8ff-4895-9241-62bac48eb1fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.320711 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a365a-a8ff-4895-9241-62bac48eb1fb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.394905 4958 generic.go:334] "Generic (PLEG): container finished" podID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerID="a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8" exitCode=0 Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.394947 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerDied","Data":"a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8"} Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.394979 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mcbkk" event={"ID":"da2a365a-a8ff-4895-9241-62bac48eb1fb","Type":"ContainerDied","Data":"fa8dc7075d11c05970f827153fd5c0267f730b6b6517bd90b18bbec175ac1acf"} Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.394997 4958 scope.go:117] "RemoveContainer" containerID="a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.395127 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mcbkk" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.431207 4958 scope.go:117] "RemoveContainer" containerID="7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.438256 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mcbkk"] Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.452827 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mcbkk"] Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.454154 4958 scope.go:117] "RemoveContainer" containerID="5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.524404 4958 scope.go:117] "RemoveContainer" containerID="a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8" Oct 06 13:27:46 crc kubenswrapper[4958]: E1006 13:27:46.524937 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8\": container with ID starting with a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8 not found: ID does not exist" containerID="a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 
13:27:46.524996 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8"} err="failed to get container status \"a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8\": rpc error: code = NotFound desc = could not find container \"a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8\": container with ID starting with a14b4aaaa4d79b662b79420b75dc81b5a1fde48ee33c5274f7cdd4c893805ff8 not found: ID does not exist" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.525028 4958 scope.go:117] "RemoveContainer" containerID="7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87" Oct 06 13:27:46 crc kubenswrapper[4958]: E1006 13:27:46.525592 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87\": container with ID starting with 7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87 not found: ID does not exist" containerID="7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.525684 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87"} err="failed to get container status \"7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87\": rpc error: code = NotFound desc = could not find container \"7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87\": container with ID starting with 7598a4a67d1dc3c820c9dc49d54de03c05cb2d23378012b831790a4845140d87 not found: ID does not exist" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.525749 4958 scope.go:117] "RemoveContainer" containerID="5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f" Oct 06 13:27:46 crc 
kubenswrapper[4958]: E1006 13:27:46.526106 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f\": container with ID starting with 5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f not found: ID does not exist" containerID="5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.526160 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f"} err="failed to get container status \"5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f\": rpc error: code = NotFound desc = could not find container \"5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f\": container with ID starting with 5082e2ed77119c226d2ad135d06a1a97a7ef781e4a66512708d700528065817f not found: ID does not exist" Oct 06 13:27:46 crc kubenswrapper[4958]: I1006 13:27:46.945668 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" path="/var/lib/kubelet/pods/da2a365a-a8ff-4895-9241-62bac48eb1fb/volumes" Oct 06 13:27:52 crc kubenswrapper[4958]: I1006 13:27:52.913695 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:27:52 crc kubenswrapper[4958]: E1006 13:27:52.915023 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:28:06 crc 
kubenswrapper[4958]: I1006 13:28:06.919190 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:28:06 crc kubenswrapper[4958]: E1006 13:28:06.919867 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:28:19 crc kubenswrapper[4958]: I1006 13:28:19.912904 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:28:19 crc kubenswrapper[4958]: E1006 13:28:19.913700 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:28:26 crc kubenswrapper[4958]: I1006 13:28:26.801059 4958 generic.go:334] "Generic (PLEG): container finished" podID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerID="0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f" exitCode=0 Oct 06 13:28:26 crc kubenswrapper[4958]: I1006 13:28:26.801137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9jtr/must-gather-thrjh" event={"ID":"4e979a79-8f58-4109-aa07-8f42159e70e3","Type":"ContainerDied","Data":"0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f"} Oct 06 13:28:26 crc kubenswrapper[4958]: I1006 13:28:26.802444 4958 scope.go:117] "RemoveContainer" 
containerID="0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f" Oct 06 13:28:27 crc kubenswrapper[4958]: I1006 13:28:27.741626 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d9jtr_must-gather-thrjh_4e979a79-8f58-4109-aa07-8f42159e70e3/gather/0.log" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.058886 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vn9m"] Oct 06 13:28:29 crc kubenswrapper[4958]: E1006 13:28:29.059732 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="extract-utilities" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.059753 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="extract-utilities" Oct 06 13:28:29 crc kubenswrapper[4958]: E1006 13:28:29.059772 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="extract-content" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.059781 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="extract-content" Oct 06 13:28:29 crc kubenswrapper[4958]: E1006 13:28:29.059799 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="registry-server" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.059808 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="registry-server" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.060070 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2a365a-a8ff-4895-9241-62bac48eb1fb" containerName="registry-server" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.061858 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.079536 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vn9m"] Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.147615 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkklh\" (UniqueName: \"kubernetes.io/projected/e3aff65c-221d-40b3-a99f-de0f1fef6572-kube-api-access-vkklh\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.148037 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-catalog-content\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.148159 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-utilities\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.250394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-catalog-content\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.250481 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-utilities\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.250561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkklh\" (UniqueName: \"kubernetes.io/projected/e3aff65c-221d-40b3-a99f-de0f1fef6572-kube-api-access-vkklh\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.251479 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-catalog-content\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.251662 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-utilities\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.276967 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkklh\" (UniqueName: \"kubernetes.io/projected/e3aff65c-221d-40b3-a99f-de0f1fef6572-kube-api-access-vkklh\") pod \"redhat-operators-8vn9m\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.387940 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:29 crc kubenswrapper[4958]: I1006 13:28:29.909875 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vn9m"] Oct 06 13:28:30 crc kubenswrapper[4958]: I1006 13:28:30.843788 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerID="fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811" exitCode=0 Oct 06 13:28:30 crc kubenswrapper[4958]: I1006 13:28:30.843903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vn9m" event={"ID":"e3aff65c-221d-40b3-a99f-de0f1fef6572","Type":"ContainerDied","Data":"fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811"} Oct 06 13:28:30 crc kubenswrapper[4958]: I1006 13:28:30.844168 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vn9m" event={"ID":"e3aff65c-221d-40b3-a99f-de0f1fef6572","Type":"ContainerStarted","Data":"1f6244df78987a6543252c37c19f654a14c944b6576c88d70d9c31717f02c2e5"} Oct 06 13:28:31 crc kubenswrapper[4958]: I1006 13:28:31.913275 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:28:31 crc kubenswrapper[4958]: E1006 13:28:31.915133 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:28:32 crc kubenswrapper[4958]: I1006 13:28:32.866747 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3aff65c-221d-40b3-a99f-de0f1fef6572" 
containerID="6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d" exitCode=0 Oct 06 13:28:32 crc kubenswrapper[4958]: I1006 13:28:32.866797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vn9m" event={"ID":"e3aff65c-221d-40b3-a99f-de0f1fef6572","Type":"ContainerDied","Data":"6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d"} Oct 06 13:28:33 crc kubenswrapper[4958]: I1006 13:28:33.877064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vn9m" event={"ID":"e3aff65c-221d-40b3-a99f-de0f1fef6572","Type":"ContainerStarted","Data":"5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c"} Oct 06 13:28:33 crc kubenswrapper[4958]: I1006 13:28:33.895297 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vn9m" podStartSLOduration=2.451010213 podStartE2EDuration="4.895280512s" podCreationTimestamp="2025-10-06 13:28:29 +0000 UTC" firstStartedPulling="2025-10-06 13:28:30.847031361 +0000 UTC m=+6064.733056669" lastFinishedPulling="2025-10-06 13:28:33.29130165 +0000 UTC m=+6067.177326968" observedRunningTime="2025-10-06 13:28:33.893461 +0000 UTC m=+6067.779486318" watchObservedRunningTime="2025-10-06 13:28:33.895280512 +0000 UTC m=+6067.781305820" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.061494 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d9jtr/must-gather-thrjh"] Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.062197 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-d9jtr/must-gather-thrjh" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="copy" containerID="cri-o://a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2" gracePeriod=2 Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.070421 4958 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-must-gather-d9jtr/must-gather-thrjh"] Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.522536 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d9jtr_must-gather-thrjh_4e979a79-8f58-4109-aa07-8f42159e70e3/copy/0.log" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.522877 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.596515 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2jb5\" (UniqueName: \"kubernetes.io/projected/4e979a79-8f58-4109-aa07-8f42159e70e3-kube-api-access-s2jb5\") pod \"4e979a79-8f58-4109-aa07-8f42159e70e3\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.596672 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e979a79-8f58-4109-aa07-8f42159e70e3-must-gather-output\") pod \"4e979a79-8f58-4109-aa07-8f42159e70e3\" (UID: \"4e979a79-8f58-4109-aa07-8f42159e70e3\") " Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.602366 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e979a79-8f58-4109-aa07-8f42159e70e3-kube-api-access-s2jb5" (OuterVolumeSpecName: "kube-api-access-s2jb5") pod "4e979a79-8f58-4109-aa07-8f42159e70e3" (UID: "4e979a79-8f58-4109-aa07-8f42159e70e3"). InnerVolumeSpecName "kube-api-access-s2jb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.698914 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2jb5\" (UniqueName: \"kubernetes.io/projected/4e979a79-8f58-4109-aa07-8f42159e70e3-kube-api-access-s2jb5\") on node \"crc\" DevicePath \"\"" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.789822 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e979a79-8f58-4109-aa07-8f42159e70e3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e979a79-8f58-4109-aa07-8f42159e70e3" (UID: "4e979a79-8f58-4109-aa07-8f42159e70e3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.802566 4958 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e979a79-8f58-4109-aa07-8f42159e70e3-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.903115 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d9jtr_must-gather-thrjh_4e979a79-8f58-4109-aa07-8f42159e70e3/copy/0.log" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.903471 4958 generic.go:334] "Generic (PLEG): container finished" podID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerID="a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2" exitCode=143 Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.903517 4958 scope.go:117] "RemoveContainer" containerID="a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.903555 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9jtr/must-gather-thrjh" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.923255 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" path="/var/lib/kubelet/pods/4e979a79-8f58-4109-aa07-8f42159e70e3/volumes" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.923946 4958 scope.go:117] "RemoveContainer" containerID="0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.994233 4958 scope.go:117] "RemoveContainer" containerID="a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2" Oct 06 13:28:36 crc kubenswrapper[4958]: E1006 13:28:36.994651 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2\": container with ID starting with a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2 not found: ID does not exist" containerID="a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.994694 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2"} err="failed to get container status \"a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2\": rpc error: code = NotFound desc = could not find container \"a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2\": container with ID starting with a54bd624dda81b64343c52b8814ce3b7ea5216f6f1157ab3a68dab321f8cf2b2 not found: ID does not exist" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.994720 4958 scope.go:117] "RemoveContainer" containerID="0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f" Oct 06 13:28:36 crc kubenswrapper[4958]: E1006 13:28:36.995412 4958 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f\": container with ID starting with 0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f not found: ID does not exist" containerID="0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f" Oct 06 13:28:36 crc kubenswrapper[4958]: I1006 13:28:36.995450 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f"} err="failed to get container status \"0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f\": rpc error: code = NotFound desc = could not find container \"0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f\": container with ID starting with 0c1d7de24edb7b3e959b371e3d31554a22c1542b5cbb509e8f02f5e3bebcf61f not found: ID does not exist" Oct 06 13:28:39 crc kubenswrapper[4958]: I1006 13:28:39.388423 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:39 crc kubenswrapper[4958]: I1006 13:28:39.388786 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:39 crc kubenswrapper[4958]: I1006 13:28:39.432381 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:39 crc kubenswrapper[4958]: I1006 13:28:39.979166 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:40 crc kubenswrapper[4958]: I1006 13:28:40.028876 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vn9m"] Oct 06 13:28:41 crc kubenswrapper[4958]: I1006 13:28:41.950120 4958 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vn9m" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="registry-server" containerID="cri-o://5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c" gracePeriod=2 Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.415216 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.512562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-catalog-content\") pod \"e3aff65c-221d-40b3-a99f-de0f1fef6572\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.512760 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkklh\" (UniqueName: \"kubernetes.io/projected/e3aff65c-221d-40b3-a99f-de0f1fef6572-kube-api-access-vkklh\") pod \"e3aff65c-221d-40b3-a99f-de0f1fef6572\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.512817 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-utilities\") pod \"e3aff65c-221d-40b3-a99f-de0f1fef6572\" (UID: \"e3aff65c-221d-40b3-a99f-de0f1fef6572\") " Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.514200 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-utilities" (OuterVolumeSpecName: "utilities") pod "e3aff65c-221d-40b3-a99f-de0f1fef6572" (UID: "e3aff65c-221d-40b3-a99f-de0f1fef6572"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.534641 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3aff65c-221d-40b3-a99f-de0f1fef6572-kube-api-access-vkklh" (OuterVolumeSpecName: "kube-api-access-vkklh") pod "e3aff65c-221d-40b3-a99f-de0f1fef6572" (UID: "e3aff65c-221d-40b3-a99f-de0f1fef6572"). InnerVolumeSpecName "kube-api-access-vkklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.614951 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkklh\" (UniqueName: \"kubernetes.io/projected/e3aff65c-221d-40b3-a99f-de0f1fef6572-kube-api-access-vkklh\") on node \"crc\" DevicePath \"\"" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.614989 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.913729 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:28:42 crc kubenswrapper[4958]: E1006 13:28:42.913997 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.960339 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerID="5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c" exitCode=0 Oct 06 
13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.961316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vn9m" event={"ID":"e3aff65c-221d-40b3-a99f-de0f1fef6572","Type":"ContainerDied","Data":"5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c"} Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.961405 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vn9m" event={"ID":"e3aff65c-221d-40b3-a99f-de0f1fef6572","Type":"ContainerDied","Data":"1f6244df78987a6543252c37c19f654a14c944b6576c88d70d9c31717f02c2e5"} Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.961473 4958 scope.go:117] "RemoveContainer" containerID="5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.961638 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vn9m" Oct 06 13:28:42 crc kubenswrapper[4958]: I1006 13:28:42.985577 4958 scope.go:117] "RemoveContainer" containerID="6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.006829 4958 scope.go:117] "RemoveContainer" containerID="fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.047173 4958 scope.go:117] "RemoveContainer" containerID="5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c" Oct 06 13:28:43 crc kubenswrapper[4958]: E1006 13:28:43.047658 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c\": container with ID starting with 5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c not found: ID does not exist" 
containerID="5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.047710 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c"} err="failed to get container status \"5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c\": rpc error: code = NotFound desc = could not find container \"5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c\": container with ID starting with 5a9f270faeb91d1eb0dc5f29fa4eefdec5e6919fd3666593d84ec2f290753b5c not found: ID does not exist" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.047736 4958 scope.go:117] "RemoveContainer" containerID="6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d" Oct 06 13:28:43 crc kubenswrapper[4958]: E1006 13:28:43.048032 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d\": container with ID starting with 6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d not found: ID does not exist" containerID="6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.048077 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d"} err="failed to get container status \"6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d\": rpc error: code = NotFound desc = could not find container \"6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d\": container with ID starting with 6d6eae529c28f2645efee942cf729db869ee7ee885630f869767535ae824c95d not found: ID does not exist" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.048099 4958 scope.go:117] 
"RemoveContainer" containerID="fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811" Oct 06 13:28:43 crc kubenswrapper[4958]: E1006 13:28:43.048427 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811\": container with ID starting with fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811 not found: ID does not exist" containerID="fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.048464 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811"} err="failed to get container status \"fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811\": rpc error: code = NotFound desc = could not find container \"fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811\": container with ID starting with fcc80d239ddeae4015f2bfca19c04f8f1d11c7289bc2cc3a931d3365a67a6811 not found: ID does not exist" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.765771 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3aff65c-221d-40b3-a99f-de0f1fef6572" (UID: "e3aff65c-221d-40b3-a99f-de0f1fef6572"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.833556 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3aff65c-221d-40b3-a99f-de0f1fef6572-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.908974 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vn9m"] Oct 06 13:28:43 crc kubenswrapper[4958]: I1006 13:28:43.920280 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vn9m"] Oct 06 13:28:44 crc kubenswrapper[4958]: I1006 13:28:44.926390 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" path="/var/lib/kubelet/pods/e3aff65c-221d-40b3-a99f-de0f1fef6572/volumes" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.330812 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rgw4p"] Oct 06 13:28:48 crc kubenswrapper[4958]: E1006 13:28:48.331719 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="extract-utilities" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.331742 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="extract-utilities" Oct 06 13:28:48 crc kubenswrapper[4958]: E1006 13:28:48.331776 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="extract-content" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.331787 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="extract-content" Oct 06 13:28:48 crc kubenswrapper[4958]: E1006 13:28:48.331817 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="gather" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.331827 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="gather" Oct 06 13:28:48 crc kubenswrapper[4958]: E1006 13:28:48.331844 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="copy" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.331854 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="copy" Oct 06 13:28:48 crc kubenswrapper[4958]: E1006 13:28:48.331873 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="registry-server" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.331883 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="registry-server" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.332233 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="gather" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.332272 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3aff65c-221d-40b3-a99f-de0f1fef6572" containerName="registry-server" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.332289 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e979a79-8f58-4109-aa07-8f42159e70e3" containerName="copy" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.334879 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.348035 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgw4p"] Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.430250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-catalog-content\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.430631 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-utilities\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.430712 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvdjx\" (UniqueName: \"kubernetes.io/projected/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-kube-api-access-mvdjx\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.532454 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-catalog-content\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.532528 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-utilities\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.532613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdjx\" (UniqueName: \"kubernetes.io/projected/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-kube-api-access-mvdjx\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.533251 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-catalog-content\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.533339 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-utilities\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.582010 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdjx\" (UniqueName: \"kubernetes.io/projected/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-kube-api-access-mvdjx\") pod \"community-operators-rgw4p\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:48 crc kubenswrapper[4958]: I1006 13:28:48.663900 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:49 crc kubenswrapper[4958]: I1006 13:28:49.181847 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgw4p"] Oct 06 13:28:49 crc kubenswrapper[4958]: W1006 13:28:49.189182 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b9d5326_a827_4fa2_a77a_a0f1a3657e2d.slice/crio-365d4fc68302bd0834add16937efc1a81a16d2b90547d4eacfd96e8483ded505 WatchSource:0}: Error finding container 365d4fc68302bd0834add16937efc1a81a16d2b90547d4eacfd96e8483ded505: Status 404 returned error can't find the container with id 365d4fc68302bd0834add16937efc1a81a16d2b90547d4eacfd96e8483ded505 Oct 06 13:28:50 crc kubenswrapper[4958]: I1006 13:28:50.039954 4958 generic.go:334] "Generic (PLEG): container finished" podID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerID="a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e" exitCode=0 Oct 06 13:28:50 crc kubenswrapper[4958]: I1006 13:28:50.040083 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerDied","Data":"a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e"} Oct 06 13:28:50 crc kubenswrapper[4958]: I1006 13:28:50.040449 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerStarted","Data":"365d4fc68302bd0834add16937efc1a81a16d2b90547d4eacfd96e8483ded505"} Oct 06 13:28:51 crc kubenswrapper[4958]: I1006 13:28:51.052277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" 
event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerStarted","Data":"c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684"} Oct 06 13:28:52 crc kubenswrapper[4958]: I1006 13:28:52.065193 4958 generic.go:334] "Generic (PLEG): container finished" podID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerID="c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684" exitCode=0 Oct 06 13:28:52 crc kubenswrapper[4958]: I1006 13:28:52.065271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerDied","Data":"c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684"} Oct 06 13:28:53 crc kubenswrapper[4958]: I1006 13:28:53.081564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerStarted","Data":"5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602"} Oct 06 13:28:53 crc kubenswrapper[4958]: I1006 13:28:53.913692 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:28:53 crc kubenswrapper[4958]: E1006 13:28:53.914393 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:28:58 crc kubenswrapper[4958]: I1006 13:28:58.665077 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:58 crc kubenswrapper[4958]: I1006 13:28:58.665492 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:58 crc kubenswrapper[4958]: I1006 13:28:58.715131 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:58 crc kubenswrapper[4958]: I1006 13:28:58.732857 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rgw4p" podStartSLOduration=8.325120472 podStartE2EDuration="10.73284129s" podCreationTimestamp="2025-10-06 13:28:48 +0000 UTC" firstStartedPulling="2025-10-06 13:28:50.041472301 +0000 UTC m=+6083.927497609" lastFinishedPulling="2025-10-06 13:28:52.449193109 +0000 UTC m=+6086.335218427" observedRunningTime="2025-10-06 13:28:53.105033522 +0000 UTC m=+6086.991058850" watchObservedRunningTime="2025-10-06 13:28:58.73284129 +0000 UTC m=+6092.618866598" Oct 06 13:28:59 crc kubenswrapper[4958]: I1006 13:28:59.208774 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:28:59 crc kubenswrapper[4958]: I1006 13:28:59.955826 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgw4p"] Oct 06 13:29:01 crc kubenswrapper[4958]: I1006 13:29:01.156456 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rgw4p" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="registry-server" containerID="cri-o://5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602" gracePeriod=2 Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.096562 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.168537 4958 generic.go:334] "Generic (PLEG): container finished" podID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerID="5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602" exitCode=0 Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.168621 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgw4p" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.168613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerDied","Data":"5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602"} Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.168682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgw4p" event={"ID":"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d","Type":"ContainerDied","Data":"365d4fc68302bd0834add16937efc1a81a16d2b90547d4eacfd96e8483ded505"} Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.168708 4958 scope.go:117] "RemoveContainer" containerID="5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.176596 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-utilities\") pod \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.176789 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-catalog-content\") pod 
\"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.176831 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvdjx\" (UniqueName: \"kubernetes.io/projected/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-kube-api-access-mvdjx\") pod \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\" (UID: \"6b9d5326-a827-4fa2-a77a-a0f1a3657e2d\") " Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.177375 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-utilities" (OuterVolumeSpecName: "utilities") pod "6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" (UID: "6b9d5326-a827-4fa2-a77a-a0f1a3657e2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.186217 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-kube-api-access-mvdjx" (OuterVolumeSpecName: "kube-api-access-mvdjx") pod "6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" (UID: "6b9d5326-a827-4fa2-a77a-a0f1a3657e2d"). InnerVolumeSpecName "kube-api-access-mvdjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.189207 4958 scope.go:117] "RemoveContainer" containerID="c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.245046 4958 scope.go:117] "RemoveContainer" containerID="a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.246674 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" (UID: "6b9d5326-a827-4fa2-a77a-a0f1a3657e2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.278911 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.278940 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvdjx\" (UniqueName: \"kubernetes.io/projected/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-kube-api-access-mvdjx\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.278950 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.289919 4958 scope.go:117] "RemoveContainer" containerID="5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602" Oct 06 13:29:02 crc kubenswrapper[4958]: E1006 13:29:02.290457 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602\": container with ID starting with 5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602 not found: ID does not exist" containerID="5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.290495 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602"} err="failed to get container status \"5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602\": rpc error: code = NotFound desc = could not find container \"5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602\": container with ID starting with 5449a3c2793ce1541c278b92da7fad5922b881fc5676cbf630e04a1564916602 not found: ID does not exist" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.290517 4958 scope.go:117] "RemoveContainer" containerID="c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684" Oct 06 13:29:02 crc kubenswrapper[4958]: E1006 13:29:02.290937 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684\": container with ID starting with c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684 not found: ID does not exist" containerID="c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.290988 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684"} err="failed to get container status \"c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684\": rpc error: code = NotFound desc = could not find container 
\"c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684\": container with ID starting with c893ba4a59a94822d92ea29404bb43cd9b65b98e9618a9bf3aa2f1fd0aba4684 not found: ID does not exist" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.291024 4958 scope.go:117] "RemoveContainer" containerID="a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e" Oct 06 13:29:02 crc kubenswrapper[4958]: E1006 13:29:02.291492 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e\": container with ID starting with a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e not found: ID does not exist" containerID="a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.291526 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e"} err="failed to get container status \"a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e\": rpc error: code = NotFound desc = could not find container \"a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e\": container with ID starting with a71d38340246fe2de0107d440e6223dff28c9041fa4cac86b02dcd78efec298e not found: ID does not exist" Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.506097 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgw4p"] Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.513849 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rgw4p"] Oct 06 13:29:02 crc kubenswrapper[4958]: I1006 13:29:02.927491 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" 
path="/var/lib/kubelet/pods/6b9d5326-a827-4fa2-a77a-a0f1a3657e2d/volumes" Oct 06 13:29:06 crc kubenswrapper[4958]: I1006 13:29:06.926263 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:29:06 crc kubenswrapper[4958]: E1006 13:29:06.928028 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:29:18 crc kubenswrapper[4958]: I1006 13:29:18.913133 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:29:18 crc kubenswrapper[4958]: E1006 13:29:18.913906 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.473484 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x4wtp/must-gather-vgmlt"] Oct 06 13:29:24 crc kubenswrapper[4958]: E1006 13:29:24.474539 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="registry-server" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.474555 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="registry-server" Oct 06 13:29:24 
crc kubenswrapper[4958]: E1006 13:29:24.474596 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="extract-utilities" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.474603 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="extract-utilities" Oct 06 13:29:24 crc kubenswrapper[4958]: E1006 13:29:24.474624 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="extract-content" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.474630 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="extract-content" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.474838 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9d5326-a827-4fa2-a77a-a0f1a3657e2d" containerName="registry-server" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.476130 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.480737 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x4wtp"/"default-dockercfg-9f262" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.480937 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x4wtp"/"kube-root-ca.crt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.486178 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x4wtp"/"openshift-service-ca.crt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.500405 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x4wtp/must-gather-vgmlt"] Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.627554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8aa28a48-3171-4382-9bad-39e174c8d36e-must-gather-output\") pod \"must-gather-vgmlt\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.627884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9wn\" (UniqueName: \"kubernetes.io/projected/8aa28a48-3171-4382-9bad-39e174c8d36e-kube-api-access-wq9wn\") pod \"must-gather-vgmlt\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.729633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9wn\" (UniqueName: \"kubernetes.io/projected/8aa28a48-3171-4382-9bad-39e174c8d36e-kube-api-access-wq9wn\") pod \"must-gather-vgmlt\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " 
pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.729804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8aa28a48-3171-4382-9bad-39e174c8d36e-must-gather-output\") pod \"must-gather-vgmlt\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.730281 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8aa28a48-3171-4382-9bad-39e174c8d36e-must-gather-output\") pod \"must-gather-vgmlt\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.749319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9wn\" (UniqueName: \"kubernetes.io/projected/8aa28a48-3171-4382-9bad-39e174c8d36e-kube-api-access-wq9wn\") pod \"must-gather-vgmlt\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:24 crc kubenswrapper[4958]: I1006 13:29:24.804420 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:29:25 crc kubenswrapper[4958]: I1006 13:29:25.245922 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x4wtp/must-gather-vgmlt"] Oct 06 13:29:25 crc kubenswrapper[4958]: I1006 13:29:25.422666 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" event={"ID":"8aa28a48-3171-4382-9bad-39e174c8d36e","Type":"ContainerStarted","Data":"dd0a6c6f30f696c14cd675a7b8417f0f520ed9a02aa02fe8d429cec745cce5ea"} Oct 06 13:29:26 crc kubenswrapper[4958]: I1006 13:29:26.433892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" event={"ID":"8aa28a48-3171-4382-9bad-39e174c8d36e","Type":"ContainerStarted","Data":"72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a"} Oct 06 13:29:26 crc kubenswrapper[4958]: I1006 13:29:26.434238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" event={"ID":"8aa28a48-3171-4382-9bad-39e174c8d36e","Type":"ContainerStarted","Data":"db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456"} Oct 06 13:29:26 crc kubenswrapper[4958]: I1006 13:29:26.452111 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" podStartSLOduration=2.452093103 podStartE2EDuration="2.452093103s" podCreationTimestamp="2025-10-06 13:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:29:26.448699175 +0000 UTC m=+6120.334724483" watchObservedRunningTime="2025-10-06 13:29:26.452093103 +0000 UTC m=+6120.338118411" Oct 06 13:29:28 crc kubenswrapper[4958]: I1006 13:29:28.849655 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-w2xnt"] Oct 06 13:29:28 crc kubenswrapper[4958]: 
I1006 13:29:28.851673 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.025415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45173304-a172-46b1-9c5d-58049cf47c8c-host\") pod \"crc-debug-w2xnt\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") " pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.025618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhr7\" (UniqueName: \"kubernetes.io/projected/45173304-a172-46b1-9c5d-58049cf47c8c-kube-api-access-hlhr7\") pod \"crc-debug-w2xnt\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") " pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.127508 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhr7\" (UniqueName: \"kubernetes.io/projected/45173304-a172-46b1-9c5d-58049cf47c8c-kube-api-access-hlhr7\") pod \"crc-debug-w2xnt\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") " pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.127677 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45173304-a172-46b1-9c5d-58049cf47c8c-host\") pod \"crc-debug-w2xnt\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") " pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.127788 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45173304-a172-46b1-9c5d-58049cf47c8c-host\") pod \"crc-debug-w2xnt\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") 
" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.148692 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhr7\" (UniqueName: \"kubernetes.io/projected/45173304-a172-46b1-9c5d-58049cf47c8c-kube-api-access-hlhr7\") pod \"crc-debug-w2xnt\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") " pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.169333 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.458633 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" event={"ID":"45173304-a172-46b1-9c5d-58049cf47c8c","Type":"ContainerStarted","Data":"4bc21e43fe47aa882346acc918ce051ace27984147d80380f4657f94f9123256"} Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.459369 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" event={"ID":"45173304-a172-46b1-9c5d-58049cf47c8c","Type":"ContainerStarted","Data":"06f563344133e027707f8c20e78090f55d2c7ae3a031c8d32735a5c23b775982"} Oct 06 13:29:29 crc kubenswrapper[4958]: I1006 13:29:29.484353 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" podStartSLOduration=1.484332494 podStartE2EDuration="1.484332494s" podCreationTimestamp="2025-10-06 13:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:29:29.477081906 +0000 UTC m=+6123.363107234" watchObservedRunningTime="2025-10-06 13:29:29.484332494 +0000 UTC m=+6123.370357812" Oct 06 13:29:32 crc kubenswrapper[4958]: I1006 13:29:32.913082 4958 scope.go:117] "RemoveContainer" 
containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:29:32 crc kubenswrapper[4958]: E1006 13:29:32.914403 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:29:33 crc kubenswrapper[4958]: I1006 13:29:33.590789 4958 scope.go:117] "RemoveContainer" containerID="03b40be786995d889587dee435a0e1efdba6c2954233a0870e1c9d9659c3436d" Oct 06 13:29:47 crc kubenswrapper[4958]: I1006 13:29:47.912698 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:29:47 crc kubenswrapper[4958]: E1006 13:29:47.913612 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.148465 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp"] Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.165367 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.173573 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.173840 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.184423 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp"] Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.339835 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pv6\" (UniqueName: \"kubernetes.io/projected/0d17d1b4-e76d-48d7-b880-7a93942a1a76-kube-api-access-s6pv6\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.340548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d17d1b4-e76d-48d7-b880-7a93942a1a76-config-volume\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.340735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d17d1b4-e76d-48d7-b880-7a93942a1a76-secret-volume\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.444097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d17d1b4-e76d-48d7-b880-7a93942a1a76-secret-volume\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.444520 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6pv6\" (UniqueName: \"kubernetes.io/projected/0d17d1b4-e76d-48d7-b880-7a93942a1a76-kube-api-access-s6pv6\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.444650 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d17d1b4-e76d-48d7-b880-7a93942a1a76-config-volume\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.445841 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d17d1b4-e76d-48d7-b880-7a93942a1a76-config-volume\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.460322 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0d17d1b4-e76d-48d7-b880-7a93942a1a76-secret-volume\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.465154 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pv6\" (UniqueName: \"kubernetes.io/projected/0d17d1b4-e76d-48d7-b880-7a93942a1a76-kube-api-access-s6pv6\") pod \"collect-profiles-29329290-66hxp\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.506557 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:00 crc kubenswrapper[4958]: I1006 13:30:00.989604 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp"] Oct 06 13:30:01 crc kubenswrapper[4958]: W1006 13:30:01.019227 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d17d1b4_e76d_48d7_b880_7a93942a1a76.slice/crio-e156a64acab5ef7a9431bd36f50144af9cab148b590431ddb6b27c82d08b2909 WatchSource:0}: Error finding container e156a64acab5ef7a9431bd36f50144af9cab148b590431ddb6b27c82d08b2909: Status 404 returned error can't find the container with id e156a64acab5ef7a9431bd36f50144af9cab148b590431ddb6b27c82d08b2909 Oct 06 13:30:01 crc kubenswrapper[4958]: I1006 13:30:01.740782 4958 generic.go:334] "Generic (PLEG): container finished" podID="0d17d1b4-e76d-48d7-b880-7a93942a1a76" containerID="b8454c10015f96432f202e01e01553aa7a293db68bddde4b9d02e6fa745d8ec6" exitCode=0 Oct 06 13:30:01 crc kubenswrapper[4958]: I1006 13:30:01.741344 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" event={"ID":"0d17d1b4-e76d-48d7-b880-7a93942a1a76","Type":"ContainerDied","Data":"b8454c10015f96432f202e01e01553aa7a293db68bddde4b9d02e6fa745d8ec6"} Oct 06 13:30:01 crc kubenswrapper[4958]: I1006 13:30:01.741411 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" event={"ID":"0d17d1b4-e76d-48d7-b880-7a93942a1a76","Type":"ContainerStarted","Data":"e156a64acab5ef7a9431bd36f50144af9cab148b590431ddb6b27c82d08b2909"} Oct 06 13:30:02 crc kubenswrapper[4958]: I1006 13:30:02.924018 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:30:02 crc kubenswrapper[4958]: E1006 13:30:02.924724 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.187598 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.211293 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d17d1b4-e76d-48d7-b880-7a93942a1a76-secret-volume\") pod \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.211350 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d17d1b4-e76d-48d7-b880-7a93942a1a76-config-volume\") pod \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.211443 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6pv6\" (UniqueName: \"kubernetes.io/projected/0d17d1b4-e76d-48d7-b880-7a93942a1a76-kube-api-access-s6pv6\") pod \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\" (UID: \"0d17d1b4-e76d-48d7-b880-7a93942a1a76\") " Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.212128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d17d1b4-e76d-48d7-b880-7a93942a1a76-config-volume" (OuterVolumeSpecName: "config-volume") pod "0d17d1b4-e76d-48d7-b880-7a93942a1a76" (UID: "0d17d1b4-e76d-48d7-b880-7a93942a1a76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.227324 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d17d1b4-e76d-48d7-b880-7a93942a1a76-kube-api-access-s6pv6" (OuterVolumeSpecName: "kube-api-access-s6pv6") pod "0d17d1b4-e76d-48d7-b880-7a93942a1a76" (UID: "0d17d1b4-e76d-48d7-b880-7a93942a1a76"). 
InnerVolumeSpecName "kube-api-access-s6pv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.227491 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d17d1b4-e76d-48d7-b880-7a93942a1a76-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0d17d1b4-e76d-48d7-b880-7a93942a1a76" (UID: "0d17d1b4-e76d-48d7-b880-7a93942a1a76"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.313489 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d17d1b4-e76d-48d7-b880-7a93942a1a76-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.313790 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d17d1b4-e76d-48d7-b880-7a93942a1a76-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.313805 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6pv6\" (UniqueName: \"kubernetes.io/projected/0d17d1b4-e76d-48d7-b880-7a93942a1a76-kube-api-access-s6pv6\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.761378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" event={"ID":"0d17d1b4-e76d-48d7-b880-7a93942a1a76","Type":"ContainerDied","Data":"e156a64acab5ef7a9431bd36f50144af9cab148b590431ddb6b27c82d08b2909"} Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.761430 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e156a64acab5ef7a9431bd36f50144af9cab148b590431ddb6b27c82d08b2909" Oct 06 13:30:03 crc kubenswrapper[4958]: I1006 13:30:03.761439 4958 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-66hxp" Oct 06 13:30:04 crc kubenswrapper[4958]: I1006 13:30:04.276506 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc"] Oct 06 13:30:04 crc kubenswrapper[4958]: I1006 13:30:04.293126 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-bttcc"] Oct 06 13:30:04 crc kubenswrapper[4958]: I1006 13:30:04.925517 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a33723-60fc-479f-912c-799341c0deba" path="/var/lib/kubelet/pods/f2a33723-60fc-479f-912c-799341c0deba/volumes" Oct 06 13:30:13 crc kubenswrapper[4958]: I1006 13:30:13.913555 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:30:13 crc kubenswrapper[4958]: E1006 13:30:13.914390 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:30:28 crc kubenswrapper[4958]: I1006 13:30:28.939828 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:30:28 crc kubenswrapper[4958]: E1006 13:30:28.940747 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:30:29 crc kubenswrapper[4958]: I1006 13:30:29.854245 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5784c7f6c4-pqpwp_d5b91f63-f0c4-4c4b-a06a-0136898c0beb/barbican-api/0.log" Oct 06 13:30:29 crc kubenswrapper[4958]: I1006 13:30:29.925556 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5784c7f6c4-pqpwp_d5b91f63-f0c4-4c4b-a06a-0136898c0beb/barbican-api-log/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.054816 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f4f85d9b4-tcldf_f3a9469b-86a8-4eec-9722-8bec4159b05e/barbican-keystone-listener/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.208935 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f4f85d9b4-tcldf_f3a9469b-86a8-4eec-9722-8bec4159b05e/barbican-keystone-listener-log/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.262850 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56588d4b7-rsgzm_d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f/barbican-worker/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.392808 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56588d4b7-rsgzm_d1f5dd7f-a3f7-4c0d-8e1e-ccbc0fc5063f/barbican-worker-log/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.449526 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-64zqq_bff355e0-d99f-4997-81e9-849deb8cea2a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.701902 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/ceilometer-notification-agent/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.772995 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/ceilometer-central-agent/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.834173 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/proxy-httpd/0.log" Oct 06 13:30:30 crc kubenswrapper[4958]: I1006 13:30:30.895123 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8110dd9c-85f9-4427-909e-1bc397a4678c/sg-core/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.104369 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5631f7c8-d7b1-4655-8acd-83a29bb5f3b3/cinder-api/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.123755 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5631f7c8-d7b1-4655-8acd-83a29bb5f3b3/cinder-api-log/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.360651 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_131b14e5-e45a-4fc4-817c-b8f82c27e92e/cinder-scheduler/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.431333 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_131b14e5-e45a-4fc4-817c-b8f82c27e92e/probe/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.561295 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7whq7_55d6c75b-9ef4-4576-bdc9-46bd62865410/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.721308 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7ppk7_9966ebae-f14d-4b3a-aea7-28843e2fe605/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.898508 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-b6zqz_b84c284f-00cf-4afd-a3e6-84c24af1caae/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:31 crc kubenswrapper[4958]: I1006 13:30:31.956189 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5lvxx_760d1ffb-81bb-4765-865c-c655d0886553/init/0.log" Oct 06 13:30:32 crc kubenswrapper[4958]: I1006 13:30:32.165136 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5lvxx_760d1ffb-81bb-4765-865c-c655d0886553/init/0.log" Oct 06 13:30:32 crc kubenswrapper[4958]: I1006 13:30:32.385784 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5lvxx_760d1ffb-81bb-4765-865c-c655d0886553/dnsmasq-dns/0.log" Oct 06 13:30:32 crc kubenswrapper[4958]: I1006 13:30:32.491330 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xg9gd_81afece6-fe0a-491c-94b8-3b19d00058c5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:32 crc kubenswrapper[4958]: I1006 13:30:32.627963 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4/glance-httpd/0.log" Oct 06 13:30:32 crc kubenswrapper[4958]: I1006 13:30:32.670546 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_33d3c29a-90b4-49ce-9dd3-0c3a415a8ec4/glance-log/0.log" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.002701 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_ddd9cc34-6d2f-41d2-ba9f-e41230964003/glance-httpd/0.log" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.035101 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ddd9cc34-6d2f-41d2-ba9f-e41230964003/glance-log/0.log" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.416160 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69f5d58bb-ghq4l_7fdb6376-1709-4378-8fe4-eaf26cf5fde7/horizon/0.log" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.420064 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gmxmw_f1064552-8f6a-46ac-8628-d9d2bc8c2a95/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.642445 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-m7n8m_27373b57-9835-4096-9b31-eab53444391c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.715685 4958 scope.go:117] "RemoveContainer" containerID="8979002fccc3c0dd33fd689f00cc5b07110ff792ddbcdd9f4ddd889deecd5683" Oct 06 13:30:33 crc kubenswrapper[4958]: I1006 13:30:33.854751 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29329261-tpdvf_80b45d5d-1a89-4c03-a387-d74a9e2912f4/keystone-cron/0.log" Oct 06 13:30:34 crc kubenswrapper[4958]: I1006 13:30:34.038167 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69f5d58bb-ghq4l_7fdb6376-1709-4378-8fe4-eaf26cf5fde7/horizon-log/0.log" Oct 06 13:30:34 crc kubenswrapper[4958]: I1006 13:30:34.066192 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_670f79b0-7850-4798-a452-f387018cd4d3/kube-state-metrics/0.log" Oct 06 13:30:34 crc 
kubenswrapper[4958]: I1006 13:30:34.340177 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-rqzwk_186200b0-8ce3-46a8-9691-42b254a077be/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:34 crc kubenswrapper[4958]: I1006 13:30:34.461054 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6d6b6556f7-c2dwg_2ba0cfe8-aec4-4d54-b307-7e7f5b1d9756/keystone-api/0.log" Oct 06 13:30:35 crc kubenswrapper[4958]: I1006 13:30:35.072622 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bc84f8f6c-tdr2k_6ef174b4-138f-4dc1-8618-afb9c9e8f9b3/neutron-httpd/0.log" Oct 06 13:30:35 crc kubenswrapper[4958]: I1006 13:30:35.082063 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hch6t_c7524451-dd6e-42b7-8454-4e9efe77c79c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:35 crc kubenswrapper[4958]: I1006 13:30:35.192358 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bc84f8f6c-tdr2k_6ef174b4-138f-4dc1-8618-afb9c9e8f9b3/neutron-api/0.log" Oct 06 13:30:36 crc kubenswrapper[4958]: I1006 13:30:36.233424 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ded8adc6-35b0-4901-89ec-7f314c7817e7/nova-cell0-conductor-conductor/0.log" Oct 06 13:30:36 crc kubenswrapper[4958]: I1006 13:30:36.915097 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f769ee5d-6085-4e88-a212-2c3e2e8f6f2b/nova-cell1-conductor-conductor/0.log" Oct 06 13:30:37 crc kubenswrapper[4958]: I1006 13:30:37.242607 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_18ae50d9-6e14-4379-b6e2-6a1845859f0c/nova-api-log/0.log" Oct 06 13:30:37 crc kubenswrapper[4958]: I1006 13:30:37.732096 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nx8z6_226865fc-14de-4b5f-a693-a27ef3d06efa/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:37 crc kubenswrapper[4958]: I1006 13:30:37.771921 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_07232aba-c139-41f7-b153-ab542bbfa39a/nova-cell1-novncproxy-novncproxy/0.log" Oct 06 13:30:37 crc kubenswrapper[4958]: I1006 13:30:37.902252 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_18ae50d9-6e14-4379-b6e2-6a1845859f0c/nova-api-api/0.log" Oct 06 13:30:38 crc kubenswrapper[4958]: I1006 13:30:38.641017 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_22f238b7-8e7b-408c-81cd-9635a10e7d3d/nova-metadata-log/0.log" Oct 06 13:30:38 crc kubenswrapper[4958]: I1006 13:30:38.929854 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b61e54d-6905-4ce9-b034-987af62ab20a/nova-scheduler-scheduler/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.112873 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51/mysql-bootstrap/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.260266 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51/mysql-bootstrap/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.318153 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_16e7f0fd-80ed-4a05-8ec6-c3b82fe3fb51/galera/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.499697 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acaf745d-7462-44e9-be0b-28424e3c2f31/mysql-bootstrap/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.748183 4958 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acaf745d-7462-44e9-be0b-28424e3c2f31/galera/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.749663 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acaf745d-7462-44e9-be0b-28424e3c2f31/mysql-bootstrap/0.log" Oct 06 13:30:39 crc kubenswrapper[4958]: I1006 13:30:39.960420 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2474d50f-478f-4d0f-abc0-f0a5135285ca/openstackclient/0.log" Oct 06 13:30:40 crc kubenswrapper[4958]: I1006 13:30:40.211436 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5x288_ca1fdee5-1c5e-4740-b69a-d2111ba255ee/ovn-controller/0.log" Oct 06 13:30:40 crc kubenswrapper[4958]: I1006 13:30:40.385921 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dk97g_0001ceb8-4afd-4d37-acfe-8ed9c976b6d9/openstack-network-exporter/0.log" Oct 06 13:30:40 crc kubenswrapper[4958]: I1006 13:30:40.648694 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovsdb-server-init/0.log" Oct 06 13:30:40 crc kubenswrapper[4958]: I1006 13:30:40.862334 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovs-vswitchd/0.log" Oct 06 13:30:40 crc kubenswrapper[4958]: I1006 13:30:40.900632 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovsdb-server-init/0.log" Oct 06 13:30:40 crc kubenswrapper[4958]: I1006 13:30:40.912926 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:30:40 crc kubenswrapper[4958]: E1006 13:30:40.913352 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:30:41 crc kubenswrapper[4958]: I1006 13:30:41.103651 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k7nm2_f8475601-8235-4d69-958e-53f8e6a2f71b/ovsdb-server/0.log" Oct 06 13:30:41 crc kubenswrapper[4958]: I1006 13:30:41.324985 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5ljwg_08d67da7-f4f3-4e1c-acd8-c8fcec30f59d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:41 crc kubenswrapper[4958]: I1006 13:30:41.644130 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0bee2760-9ae4-4988-80cc-1bf507ae032b/openstack-network-exporter/0.log" Oct 06 13:30:41 crc kubenswrapper[4958]: I1006 13:30:41.751201 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0bee2760-9ae4-4988-80cc-1bf507ae032b/ovn-northd/0.log" Oct 06 13:30:41 crc kubenswrapper[4958]: I1006 13:30:41.898532 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_22f238b7-8e7b-408c-81cd-9635a10e7d3d/nova-metadata-metadata/0.log" Oct 06 13:30:41 crc kubenswrapper[4958]: I1006 13:30:41.952173 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d699699e-9c26-4129-9483-3ac7d597f948/openstack-network-exporter/0.log" Oct 06 13:30:42 crc kubenswrapper[4958]: I1006 13:30:42.138293 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d699699e-9c26-4129-9483-3ac7d597f948/ovsdbserver-nb/0.log" Oct 06 13:30:42 crc kubenswrapper[4958]: I1006 13:30:42.182208 4958 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90b1ca14-6697-42b0-8e63-fcec51e4599a/openstack-network-exporter/0.log" Oct 06 13:30:42 crc kubenswrapper[4958]: I1006 13:30:42.425437 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90b1ca14-6697-42b0-8e63-fcec51e4599a/ovsdbserver-sb/0.log" Oct 06 13:30:42 crc kubenswrapper[4958]: I1006 13:30:42.718851 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-999fc56db-gbkpz_13b3f1be-75c2-49c7-a3c6-d6dd842788c4/placement-api/0.log" Oct 06 13:30:42 crc kubenswrapper[4958]: I1006 13:30:42.908619 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35b611fd-63b3-4146-b713-3fef7c26c3c7/setup-container/0.log" Oct 06 13:30:42 crc kubenswrapper[4958]: I1006 13:30:42.955391 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-999fc56db-gbkpz_13b3f1be-75c2-49c7-a3c6-d6dd842788c4/placement-log/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.140318 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35b611fd-63b3-4146-b713-3fef7c26c3c7/rabbitmq/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.175647 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35b611fd-63b3-4146-b713-3fef7c26c3c7/setup-container/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.310628 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02fc87b1-4709-4476-a597-9154c5c3a322/setup-container/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.588701 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02fc87b1-4709-4476-a597-9154c5c3a322/rabbitmq/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.606637 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_02fc87b1-4709-4476-a597-9154c5c3a322/setup-container/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.818049 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bf2ht_eee45a20-bff0-4c1c-a7a7-84646b71c82d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:43 crc kubenswrapper[4958]: I1006 13:30:43.901446 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-m996d_3dd583f8-4c3f-4059-8b0f-621021a4eaa1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:44 crc kubenswrapper[4958]: I1006 13:30:44.059050 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-jchd4_99935553-e8d4-497e-be84-8fa4a807fd72/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:44 crc kubenswrapper[4958]: I1006 13:30:44.262045 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jd665_7a0b8144-e1d6-4d95-8f13-71ea09785481/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:44 crc kubenswrapper[4958]: I1006 13:30:44.389975 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-r248v_263da9ff-5240-442f-9a4b-e2d8b5a30321/ssh-known-hosts-edpm-deployment/0.log" Oct 06 13:30:44 crc kubenswrapper[4958]: I1006 13:30:44.597574 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cc4dd9879-7xgdr_eb6c6362-e91c-47c8-8616-702c4cada19a/proxy-server/0.log" Oct 06 13:30:44 crc kubenswrapper[4958]: I1006 13:30:44.813995 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mvxsm_368db9f7-1e1b-42e1-a8d7-af0c7d9d910f/swift-ring-rebalance/0.log" Oct 06 13:30:44 crc kubenswrapper[4958]: I1006 13:30:44.890610 4958 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cc4dd9879-7xgdr_eb6c6362-e91c-47c8-8616-702c4cada19a/proxy-httpd/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.051046 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-auditor/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.122514 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-reaper/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.269490 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-replicator/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.292352 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/account-server/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.402352 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-auditor/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.630010 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-replicator/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.734625 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-server/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.797570 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/container-updater/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.850068 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-auditor/0.log" Oct 06 13:30:45 crc kubenswrapper[4958]: I1006 13:30:45.970721 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-expirer/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.050130 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-replicator/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.063634 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-server/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.199279 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/object-updater/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.261096 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/rsync/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.319130 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a90ffe6-00a1-4bee-862b-b1ca74e3185d/swift-recon-cron/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.483670 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-47hl4_5569cfb8-1cd6-4f3d-9eee-282ddce72171/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:46 crc kubenswrapper[4958]: I1006 13:30:46.680279 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kg5b5_264707ac-53c7-4002-bb44-5ed2af779aec/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:30:50 crc kubenswrapper[4958]: I1006 
13:30:50.887737 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3a4b3c4e-da8b-4eb1-a159-6376181dcbb8/memcached/0.log" Oct 06 13:30:54 crc kubenswrapper[4958]: I1006 13:30:54.913115 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:30:54 crc kubenswrapper[4958]: E1006 13:30:54.913743 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:31:05 crc kubenswrapper[4958]: I1006 13:31:05.914171 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:31:05 crc kubenswrapper[4958]: E1006 13:31:05.914968 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:31:16 crc kubenswrapper[4958]: I1006 13:31:16.922970 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:31:16 crc kubenswrapper[4958]: E1006 13:31:16.924226 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:31:27 crc kubenswrapper[4958]: I1006 13:31:27.605292 4958 generic.go:334] "Generic (PLEG): container finished" podID="45173304-a172-46b1-9c5d-58049cf47c8c" containerID="4bc21e43fe47aa882346acc918ce051ace27984147d80380f4657f94f9123256" exitCode=0 Oct 06 13:31:27 crc kubenswrapper[4958]: I1006 13:31:27.605360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" event={"ID":"45173304-a172-46b1-9c5d-58049cf47c8c","Type":"ContainerDied","Data":"4bc21e43fe47aa882346acc918ce051ace27984147d80380f4657f94f9123256"} Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.725585 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.757612 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-w2xnt"] Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.765007 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-w2xnt"] Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.871456 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlhr7\" (UniqueName: \"kubernetes.io/projected/45173304-a172-46b1-9c5d-58049cf47c8c-kube-api-access-hlhr7\") pod \"45173304-a172-46b1-9c5d-58049cf47c8c\" (UID: \"45173304-a172-46b1-9c5d-58049cf47c8c\") " Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.871587 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45173304-a172-46b1-9c5d-58049cf47c8c-host\") pod \"45173304-a172-46b1-9c5d-58049cf47c8c\" (UID: 
\"45173304-a172-46b1-9c5d-58049cf47c8c\") " Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.871743 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45173304-a172-46b1-9c5d-58049cf47c8c-host" (OuterVolumeSpecName: "host") pod "45173304-a172-46b1-9c5d-58049cf47c8c" (UID: "45173304-a172-46b1-9c5d-58049cf47c8c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.872133 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45173304-a172-46b1-9c5d-58049cf47c8c-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.880369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45173304-a172-46b1-9c5d-58049cf47c8c-kube-api-access-hlhr7" (OuterVolumeSpecName: "kube-api-access-hlhr7") pod "45173304-a172-46b1-9c5d-58049cf47c8c" (UID: "45173304-a172-46b1-9c5d-58049cf47c8c"). InnerVolumeSpecName "kube-api-access-hlhr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.930703 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45173304-a172-46b1-9c5d-58049cf47c8c" path="/var/lib/kubelet/pods/45173304-a172-46b1-9c5d-58049cf47c8c/volumes" Oct 06 13:31:28 crc kubenswrapper[4958]: I1006 13:31:28.974322 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlhr7\" (UniqueName: \"kubernetes.io/projected/45173304-a172-46b1-9c5d-58049cf47c8c-kube-api-access-hlhr7\") on node \"crc\" DevicePath \"\"" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.628855 4958 scope.go:117] "RemoveContainer" containerID="4bc21e43fe47aa882346acc918ce051ace27984147d80380f4657f94f9123256" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.628905 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-w2xnt" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.945855 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-thvzf"] Oct 06 13:31:29 crc kubenswrapper[4958]: E1006 13:31:29.946232 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45173304-a172-46b1-9c5d-58049cf47c8c" containerName="container-00" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.946243 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="45173304-a172-46b1-9c5d-58049cf47c8c" containerName="container-00" Oct 06 13:31:29 crc kubenswrapper[4958]: E1006 13:31:29.946274 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d17d1b4-e76d-48d7-b880-7a93942a1a76" containerName="collect-profiles" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.946281 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d17d1b4-e76d-48d7-b880-7a93942a1a76" containerName="collect-profiles" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.946455 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d17d1b4-e76d-48d7-b880-7a93942a1a76" containerName="collect-profiles" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.946483 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="45173304-a172-46b1-9c5d-58049cf47c8c" containerName="container-00" Oct 06 13:31:29 crc kubenswrapper[4958]: I1006 13:31:29.947038 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.146128 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a996379-5a68-4790-8c60-19cf1e31399a-host\") pod \"crc-debug-thvzf\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.146213 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw5dt\" (UniqueName: \"kubernetes.io/projected/5a996379-5a68-4790-8c60-19cf1e31399a-kube-api-access-xw5dt\") pod \"crc-debug-thvzf\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.247872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a996379-5a68-4790-8c60-19cf1e31399a-host\") pod \"crc-debug-thvzf\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.248217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw5dt\" (UniqueName: \"kubernetes.io/projected/5a996379-5a68-4790-8c60-19cf1e31399a-kube-api-access-xw5dt\") pod \"crc-debug-thvzf\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.247983 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a996379-5a68-4790-8c60-19cf1e31399a-host\") pod \"crc-debug-thvzf\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc 
kubenswrapper[4958]: I1006 13:31:30.267698 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw5dt\" (UniqueName: \"kubernetes.io/projected/5a996379-5a68-4790-8c60-19cf1e31399a-kube-api-access-xw5dt\") pod \"crc-debug-thvzf\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.564251 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.641565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" event={"ID":"5a996379-5a68-4790-8c60-19cf1e31399a","Type":"ContainerStarted","Data":"a8ec4591c2cfde2a23a5dddddca84c8c9ec43a79676fc3f5642aa4c754ca2d59"} Oct 06 13:31:30 crc kubenswrapper[4958]: I1006 13:31:30.913293 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:31:30 crc kubenswrapper[4958]: E1006 13:31:30.913817 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:31:31 crc kubenswrapper[4958]: I1006 13:31:31.655627 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a996379-5a68-4790-8c60-19cf1e31399a" containerID="6232070dca24217c9e7d358cad9741f9729cba860784a472e312323c603c8b1b" exitCode=0 Oct 06 13:31:31 crc kubenswrapper[4958]: I1006 13:31:31.655677 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" 
event={"ID":"5a996379-5a68-4790-8c60-19cf1e31399a","Type":"ContainerDied","Data":"6232070dca24217c9e7d358cad9741f9729cba860784a472e312323c603c8b1b"} Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.766660 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.890342 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a996379-5a68-4790-8c60-19cf1e31399a-host\") pod \"5a996379-5a68-4790-8c60-19cf1e31399a\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.890414 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw5dt\" (UniqueName: \"kubernetes.io/projected/5a996379-5a68-4790-8c60-19cf1e31399a-kube-api-access-xw5dt\") pod \"5a996379-5a68-4790-8c60-19cf1e31399a\" (UID: \"5a996379-5a68-4790-8c60-19cf1e31399a\") " Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.890469 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a996379-5a68-4790-8c60-19cf1e31399a-host" (OuterVolumeSpecName: "host") pod "5a996379-5a68-4790-8c60-19cf1e31399a" (UID: "5a996379-5a68-4790-8c60-19cf1e31399a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.891214 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a996379-5a68-4790-8c60-19cf1e31399a-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.897803 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a996379-5a68-4790-8c60-19cf1e31399a-kube-api-access-xw5dt" (OuterVolumeSpecName: "kube-api-access-xw5dt") pod "5a996379-5a68-4790-8c60-19cf1e31399a" (UID: "5a996379-5a68-4790-8c60-19cf1e31399a"). InnerVolumeSpecName "kube-api-access-xw5dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:31:32 crc kubenswrapper[4958]: I1006 13:31:32.992817 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw5dt\" (UniqueName: \"kubernetes.io/projected/5a996379-5a68-4790-8c60-19cf1e31399a-kube-api-access-xw5dt\") on node \"crc\" DevicePath \"\"" Oct 06 13:31:33 crc kubenswrapper[4958]: I1006 13:31:33.671566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" event={"ID":"5a996379-5a68-4790-8c60-19cf1e31399a","Type":"ContainerDied","Data":"a8ec4591c2cfde2a23a5dddddca84c8c9ec43a79676fc3f5642aa4c754ca2d59"} Oct 06 13:31:33 crc kubenswrapper[4958]: I1006 13:31:33.671611 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ec4591c2cfde2a23a5dddddca84c8c9ec43a79676fc3f5642aa4c754ca2d59" Oct 06 13:31:33 crc kubenswrapper[4958]: I1006 13:31:33.671622 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-thvzf" Oct 06 13:31:33 crc kubenswrapper[4958]: I1006 13:31:33.790687 4958 scope.go:117] "RemoveContainer" containerID="dfab613495c8c0f535addb29d08ff6814e5174b2362e727f452431f62a7579bb" Oct 06 13:31:40 crc kubenswrapper[4958]: I1006 13:31:40.942447 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-thvzf"] Oct 06 13:31:40 crc kubenswrapper[4958]: I1006 13:31:40.951670 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-thvzf"] Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.139823 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-99hpr"] Oct 06 13:31:42 crc kubenswrapper[4958]: E1006 13:31:42.140347 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a996379-5a68-4790-8c60-19cf1e31399a" containerName="container-00" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.140370 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a996379-5a68-4790-8c60-19cf1e31399a" containerName="container-00" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.140848 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a996379-5a68-4790-8c60-19cf1e31399a" containerName="container-00" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.141507 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.260407 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-host\") pod \"crc-debug-99hpr\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.261293 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qfm\" (UniqueName: \"kubernetes.io/projected/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-kube-api-access-k5qfm\") pod \"crc-debug-99hpr\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.362908 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-host\") pod \"crc-debug-99hpr\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.362982 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qfm\" (UniqueName: \"kubernetes.io/projected/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-kube-api-access-k5qfm\") pod \"crc-debug-99hpr\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.363030 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-host\") pod \"crc-debug-99hpr\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc 
kubenswrapper[4958]: I1006 13:31:42.383071 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5qfm\" (UniqueName: \"kubernetes.io/projected/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-kube-api-access-k5qfm\") pod \"crc-debug-99hpr\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.463724 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.741521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" event={"ID":"c7820fa4-6abe-404b-ad6a-439c3bc0abb0","Type":"ContainerStarted","Data":"0930707e888290ce8160e571498419528e629b90fc7c3213a120162097695905"} Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.741860 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" event={"ID":"c7820fa4-6abe-404b-ad6a-439c3bc0abb0","Type":"ContainerStarted","Data":"b5b421c0419550c48fdda5099c98f371c42a1cc52aeb3e05359236097b9e0440"} Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.759292 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" podStartSLOduration=0.759271489 podStartE2EDuration="759.271489ms" podCreationTimestamp="2025-10-06 13:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:31:42.755039197 +0000 UTC m=+6256.641064505" watchObservedRunningTime="2025-10-06 13:31:42.759271489 +0000 UTC m=+6256.645296787" Oct 06 13:31:42 crc kubenswrapper[4958]: I1006 13:31:42.926847 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a996379-5a68-4790-8c60-19cf1e31399a" 
path="/var/lib/kubelet/pods/5a996379-5a68-4790-8c60-19cf1e31399a/volumes" Oct 06 13:31:43 crc kubenswrapper[4958]: I1006 13:31:43.751703 4958 generic.go:334] "Generic (PLEG): container finished" podID="c7820fa4-6abe-404b-ad6a-439c3bc0abb0" containerID="0930707e888290ce8160e571498419528e629b90fc7c3213a120162097695905" exitCode=0 Oct 06 13:31:43 crc kubenswrapper[4958]: I1006 13:31:43.751750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" event={"ID":"c7820fa4-6abe-404b-ad6a-439c3bc0abb0","Type":"ContainerDied","Data":"0930707e888290ce8160e571498419528e629b90fc7c3213a120162097695905"} Oct 06 13:31:44 crc kubenswrapper[4958]: I1006 13:31:44.887248 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:44 crc kubenswrapper[4958]: I1006 13:31:44.930222 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-99hpr"] Oct 06 13:31:44 crc kubenswrapper[4958]: I1006 13:31:44.930260 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x4wtp/crc-debug-99hpr"] Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.010224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5qfm\" (UniqueName: \"kubernetes.io/projected/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-kube-api-access-k5qfm\") pod \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.010398 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-host\") pod \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\" (UID: \"c7820fa4-6abe-404b-ad6a-439c3bc0abb0\") " Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.010977 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-host" (OuterVolumeSpecName: "host") pod "c7820fa4-6abe-404b-ad6a-439c3bc0abb0" (UID: "c7820fa4-6abe-404b-ad6a-439c3bc0abb0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.016742 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-kube-api-access-k5qfm" (OuterVolumeSpecName: "kube-api-access-k5qfm") pod "c7820fa4-6abe-404b-ad6a-439c3bc0abb0" (UID: "c7820fa4-6abe-404b-ad6a-439c3bc0abb0"). InnerVolumeSpecName "kube-api-access-k5qfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.112729 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5qfm\" (UniqueName: \"kubernetes.io/projected/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-kube-api-access-k5qfm\") on node \"crc\" DevicePath \"\"" Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.112771 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7820fa4-6abe-404b-ad6a-439c3bc0abb0-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.771723 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5b421c0419550c48fdda5099c98f371c42a1cc52aeb3e05359236097b9e0440" Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.771803 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/crc-debug-99hpr" Oct 06 13:31:45 crc kubenswrapper[4958]: I1006 13:31:45.913369 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:31:45 crc kubenswrapper[4958]: E1006 13:31:45.913732 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:31:46 crc kubenswrapper[4958]: I1006 13:31:46.652182 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-l8pp6_cbe81ce3-66c4-4226-bc0a-78d6757561ff/kube-rbac-proxy/0.log" Oct 06 13:31:46 crc kubenswrapper[4958]: I1006 13:31:46.658728 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-l8pp6_cbe81ce3-66c4-4226-bc0a-78d6757561ff/manager/0.log" Oct 06 13:31:46 crc kubenswrapper[4958]: I1006 13:31:46.857284 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-th999_b7998026-248b-4fdb-b5fe-8e6ca29c69f0/kube-rbac-proxy/0.log" Oct 06 13:31:46 crc kubenswrapper[4958]: I1006 13:31:46.932082 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7820fa4-6abe-404b-ad6a-439c3bc0abb0" path="/var/lib/kubelet/pods/c7820fa4-6abe-404b-ad6a-439c3bc0abb0/volumes" Oct 06 13:31:46 crc kubenswrapper[4958]: I1006 13:31:46.961594 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-th999_b7998026-248b-4fdb-b5fe-8e6ca29c69f0/manager/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.020257 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-4qklz_efb3e8ee-d92e-49fe-82c8-3fbe5794410f/kube-rbac-proxy/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.109939 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-4qklz_efb3e8ee-d92e-49fe-82c8-3fbe5794410f/manager/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.178404 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/util/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.345623 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/pull/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.348163 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/util/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.350747 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/pull/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.506819 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/util/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: 
I1006 13:31:47.537768 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/extract/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.555839 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5a5c0b02fdcad19334731c5311e773a7ecde0ba9b698f7efdee87252acvwtt_825a268b-7779-4e3a-b87c-6769d02e8213/pull/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.681919 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-jj784_da95cbd7-81b9-48e7-99eb-207063cf651a/kube-rbac-proxy/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.773418 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-jj784_da95cbd7-81b9-48e7-99eb-207063cf651a/manager/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.794259 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-xc89l_96446a00-b397-4b48-94bf-432c32ed13cb/kube-rbac-proxy/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.891878 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-xc89l_96446a00-b397-4b48-94bf-432c32ed13cb/manager/0.log" Oct 06 13:31:47 crc kubenswrapper[4958]: I1006 13:31:47.980516 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-7nfvh_7d2f0a48-cffe-49d6-8ac8-830558228e2a/kube-rbac-proxy/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.017132 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-7nfvh_7d2f0a48-cffe-49d6-8ac8-830558228e2a/manager/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.168605 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-dnk6g_8a0f56a2-c168-4707-acac-43cc91b44835/kube-rbac-proxy/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.307993 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-6h8b6_78e7c04a-fb1a-420f-a99b-94b6b0cf899a/kube-rbac-proxy/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.363077 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-dnk6g_8a0f56a2-c168-4707-acac-43cc91b44835/manager/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.383129 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-6h8b6_78e7c04a-fb1a-420f-a99b-94b6b0cf899a/manager/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.529534 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-86s48_e1d4f271-a424-45d7-abf0-33633ac7713c/kube-rbac-proxy/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.606908 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-86s48_e1d4f271-a424-45d7-abf0-33633ac7713c/manager/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.676394 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-gqj7w_cb612d52-fceb-471f-af53-104bfc2966e7/kube-rbac-proxy/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.750501 
4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-gqj7w_cb612d52-fceb-471f-af53-104bfc2966e7/manager/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.810773 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-42wnq_78e5cfa4-7e8c-4fb2-b90d-bd9967385a71/kube-rbac-proxy/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.880018 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-42wnq_78e5cfa4-7e8c-4fb2-b90d-bd9967385a71/manager/0.log" Oct 06 13:31:48 crc kubenswrapper[4958]: I1006 13:31:48.951772 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-7r9bg_3332b65c-b3bf-44f5-ae31-865d77029641/kube-rbac-proxy/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.022570 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-7r9bg_3332b65c-b3bf-44f5-ae31-865d77029641/manager/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.163483 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-nvbjx_c35f7c69-655c-4d86-bcfd-29a899cf3011/kube-rbac-proxy/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.264657 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-nvbjx_c35f7c69-655c-4d86-bcfd-29a899cf3011/manager/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.316348 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-crg8g_28816408-a493-4e27-8213-998d338cc1d0/kube-rbac-proxy/0.log" Oct 06 13:31:49 crc 
kubenswrapper[4958]: I1006 13:31:49.388750 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-crg8g_28816408-a493-4e27-8213-998d338cc1d0/manager/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.471938 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv_6558de0f-a2a0-4841-9764-574061835f3b/kube-rbac-proxy/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.509023 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665clrhkv_6558de0f-a2a0-4841-9764-574061835f3b/manager/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.626438 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64c95c565c-djmq7_e3f7c90b-8bb7-4b1c-bab9-0c341627ee32/kube-rbac-proxy/0.log" Oct 06 13:31:49 crc kubenswrapper[4958]: I1006 13:31:49.938331 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-57448bb547-2ptw6_a2a23b45-5568-49fb-9e85-6bce53831d13/kube-rbac-proxy/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.004323 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-57448bb547-2ptw6_a2a23b45-5568-49fb-9e85-6bce53831d13/operator/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.154799 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wdw62_e61771dc-7e62-4e42-a99a-c8eae920cb26/registry-server/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.230850 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-646d647dd5-fqrj2_28613b85-8223-4190-b2de-a88d186b8901/kube-rbac-proxy/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.437831 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-646d647dd5-fqrj2_28613b85-8223-4190-b2de-a88d186b8901/manager/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.546840 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-lf55c_7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97/kube-rbac-proxy/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.640710 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-lf55c_7c2e69f6-a1a9-46e9-9fa5-ff6002bf1f97/manager/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.741260 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64c95c565c-djmq7_e3f7c90b-8bb7-4b1c-bab9-0c341627ee32/manager/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.776896 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-8c99l_ccce5a46-80c5-4f14-b63d-d4eff64bef36/operator/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.844058 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-4d8cf_a8547a74-8a2d-4a7f-9852-71036642c51a/kube-rbac-proxy/0.log" Oct 06 13:31:50 crc kubenswrapper[4958]: I1006 13:31:50.935411 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-zfxpj_c060a91a-1009-469c-a9f4-d2e3b3d34840/kube-rbac-proxy/0.log" Oct 06 13:31:51 crc kubenswrapper[4958]: I1006 
13:31:51.013862 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-4d8cf_a8547a74-8a2d-4a7f-9852-71036642c51a/manager/0.log" Oct 06 13:31:51 crc kubenswrapper[4958]: I1006 13:31:51.062251 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-zfxpj_c060a91a-1009-469c-a9f4-d2e3b3d34840/manager/0.log" Oct 06 13:31:51 crc kubenswrapper[4958]: I1006 13:31:51.107249 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-m5gw5_d79b059c-ab85-4eed-937e-6f7844c24621/kube-rbac-proxy/0.log" Oct 06 13:31:51 crc kubenswrapper[4958]: I1006 13:31:51.151861 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-m5gw5_d79b059c-ab85-4eed-937e-6f7844c24621/manager/0.log" Oct 06 13:31:51 crc kubenswrapper[4958]: I1006 13:31:51.259864 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-hfq6r_0bd35c63-af59-499c-baaa-8cec7e13f7bc/kube-rbac-proxy/0.log" Oct 06 13:31:51 crc kubenswrapper[4958]: I1006 13:31:51.322858 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-hfq6r_0bd35c63-af59-499c-baaa-8cec7e13f7bc/manager/0.log" Oct 06 13:31:58 crc kubenswrapper[4958]: I1006 13:31:58.913863 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:31:58 crc kubenswrapper[4958]: E1006 13:31:58.914777 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:32:07 crc kubenswrapper[4958]: I1006 13:32:07.042352 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jxkkt_dc407428-3c19-40aa-b476-c159f9b8f2a4/control-plane-machine-set-operator/0.log" Oct 06 13:32:07 crc kubenswrapper[4958]: I1006 13:32:07.230090 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-x8mdr_d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8/kube-rbac-proxy/0.log" Oct 06 13:32:07 crc kubenswrapper[4958]: I1006 13:32:07.268812 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-x8mdr_d949b5d0-2cd6-40ab-8bd1-9fb19fd615f8/machine-api-operator/0.log" Oct 06 13:32:13 crc kubenswrapper[4958]: I1006 13:32:13.913927 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:32:13 crc kubenswrapper[4958]: E1006 13:32:13.914688 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-whw6z_openshift-machine-config-operator(1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" Oct 06 13:32:18 crc kubenswrapper[4958]: I1006 13:32:18.371889 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d6ffb_063d4ef1-4461-4677-90de-7e746456a573/cert-manager-controller/0.log" Oct 06 13:32:18 crc kubenswrapper[4958]: I1006 13:32:18.557004 4958 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-75ksk_78fb0e67-9bf0-4357-9208-9fff92c3074c/cert-manager-cainjector/0.log" Oct 06 13:32:18 crc kubenswrapper[4958]: I1006 13:32:18.645784 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bcxhs_2e49dc60-a2ba-4e79-9563-2dd8857d45b0/cert-manager-webhook/0.log" Oct 06 13:32:25 crc kubenswrapper[4958]: I1006 13:32:25.913517 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223" Oct 06 13:32:26 crc kubenswrapper[4958]: I1006 13:32:26.148441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"b2f276db3618825e62e624b3eb0087b1b964491be60c64739bf3f0b8d6d1c8b2"} Oct 06 13:32:29 crc kubenswrapper[4958]: I1006 13:32:29.714985 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-rd5gn_5f482352-713c-4502-aded-dfe37c5fa8bc/nmstate-console-plugin/0.log" Oct 06 13:32:29 crc kubenswrapper[4958]: I1006 13:32:29.859651 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6vlgl_89638e7b-fd22-4b85-8ee9-7eb5353f06c0/nmstate-handler/0.log" Oct 06 13:32:29 crc kubenswrapper[4958]: I1006 13:32:29.868510 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-cmgch_fda4902f-9bcc-419f-80f2-40a46dc2e7dd/kube-rbac-proxy/0.log" Oct 06 13:32:29 crc kubenswrapper[4958]: I1006 13:32:29.896531 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-cmgch_fda4902f-9bcc-419f-80f2-40a46dc2e7dd/nmstate-metrics/0.log" Oct 06 13:32:30 crc kubenswrapper[4958]: I1006 13:32:30.047984 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-cl46g_66c0e594-48e4-4f2f-b25e-5b69f377d6e2/nmstate-operator/0.log" Oct 06 13:32:30 crc kubenswrapper[4958]: I1006 13:32:30.129735 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-82rnp_4605847c-947f-4955-b80b-87bb98b3c946/nmstate-webhook/0.log" Oct 06 13:32:42 crc kubenswrapper[4958]: I1006 13:32:42.962885 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-tz4r7_38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5/kube-rbac-proxy/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.018127 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-tz4r7_38cbcdd4-5fc7-4c9f-9adc-d2b4c8362ce5/controller/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.135597 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.258531 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.284955 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.322224 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.344935 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.554728 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.560600 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.584363 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.606250 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.755408 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-frr-files/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.782478 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-reloader/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.795767 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/cp-metrics/0.log" Oct 06 13:32:43 crc kubenswrapper[4958]: I1006 13:32:43.829350 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/controller/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.015472 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/frr-metrics/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.057319 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/kube-rbac-proxy/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.068889 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/kube-rbac-proxy-frr/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.303807 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/reloader/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.308039 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-vf4w8_f4ed48dc-17a8-4241-9c4e-8febcebb2c45/frr-k8s-webhook-server/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.536351 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7bff9bd6d4-tzpwc_cf4a240a-9885-4f19-aea0-799fe1715bb3/manager/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.724291 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-698665c988-htg28_b9d4c539-c6bf-4300-a2cc-9647dbb9fe53/webhook-server/0.log" Oct 06 13:32:44 crc kubenswrapper[4958]: I1006 13:32:44.861645 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpvkk_7d0d517b-7a87-4cd1-9039-998c3765332f/kube-rbac-proxy/0.log" Oct 06 13:32:45 crc kubenswrapper[4958]: I1006 13:32:45.434653 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpvkk_7d0d517b-7a87-4cd1-9039-998c3765332f/speaker/0.log" Oct 06 13:32:45 crc kubenswrapper[4958]: I1006 13:32:45.691891 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mllk7_2cab249a-2dc5-4211-a567-b55c234a8853/frr/0.log" Oct 06 13:32:56 crc kubenswrapper[4958]: I1006 13:32:56.759850 4958 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/util/0.log" Oct 06 13:32:56 crc kubenswrapper[4958]: I1006 13:32:56.948662 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/util/0.log" Oct 06 13:32:56 crc kubenswrapper[4958]: I1006 13:32:56.972308 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/pull/0.log" Oct 06 13:32:56 crc kubenswrapper[4958]: I1006 13:32:56.975550 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/pull/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.180654 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/util/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.203170 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/extract/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.220737 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2rtm7t_ea75ba71-b6d0-4620-a412-26fb313a0bff/pull/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.345827 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-utilities/0.log" Oct 06 
13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.501502 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-content/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.534048 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-content/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.537613 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-utilities/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.681675 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-content/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.746329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/extract-utilities/0.log" Oct 06 13:32:57 crc kubenswrapper[4958]: I1006 13:32:57.865846 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-utilities/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.218238 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-content/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.299014 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-utilities/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.304108 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-content/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.414757 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qqm7m_c8ab94cb-0fde-495e-9e1b-cb57600ce892/registry-server/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.487444 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-content/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.497729 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/extract-utilities/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.689103 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/util/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.893472 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/pull/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.948819 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/pull/0.log" Oct 06 13:32:58 crc kubenswrapper[4958]: I1006 13:32:58.966865 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/util/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.080391 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/util/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.142827 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/pull/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.261838 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c5x2pl_0b95314f-617e-41fc-9afd-26ea796825c8/extract/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.340089 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4zfnc_446071e0-1a30-4a0d-8b68-76eb7ece32c9/registry-server/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.642005 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xg6kb_e4ac258a-8e41-4889-9395-9f0a614425cb/marketplace-operator/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.711186 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-utilities/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.926007 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-content/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.935237 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-utilities/0.log" Oct 06 13:32:59 crc kubenswrapper[4958]: I1006 13:32:59.962303 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-content/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.190191 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-utilities/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.243977 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/extract-content/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.459080 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gcls7_01236baa-2bc7-4607-b92a-be74c28426af/registry-server/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.486263 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-utilities/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.647440 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-content/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.650927 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-utilities/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.653647 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-content/0.log" Oct 06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.848059 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-content/0.log" Oct 
06 13:33:00 crc kubenswrapper[4958]: I1006 13:33:00.855099 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/extract-utilities/0.log" Oct 06 13:33:01 crc kubenswrapper[4958]: I1006 13:33:01.596451 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lsbmd_a492d97c-2ec7-4443-9bda-536127c35353/registry-server/0.log" Oct 06 13:34:53 crc kubenswrapper[4958]: I1006 13:34:53.802129 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:34:53 crc kubenswrapper[4958]: I1006 13:34:53.802771 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:35:12 crc kubenswrapper[4958]: I1006 13:35:12.814572 4958 generic.go:334] "Generic (PLEG): container finished" podID="8aa28a48-3171-4382-9bad-39e174c8d36e" containerID="db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456" exitCode=0 Oct 06 13:35:12 crc kubenswrapper[4958]: I1006 13:35:12.814665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" event={"ID":"8aa28a48-3171-4382-9bad-39e174c8d36e","Type":"ContainerDied","Data":"db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456"} Oct 06 13:35:12 crc kubenswrapper[4958]: I1006 13:35:12.816239 4958 scope.go:117] "RemoveContainer" containerID="db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456" Oct 06 13:35:13 crc 
kubenswrapper[4958]: I1006 13:35:13.764473 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x4wtp_must-gather-vgmlt_8aa28a48-3171-4382-9bad-39e174c8d36e/gather/0.log" Oct 06 13:35:23 crc kubenswrapper[4958]: I1006 13:35:23.802541 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:35:23 crc kubenswrapper[4958]: I1006 13:35:23.804380 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:35:25 crc kubenswrapper[4958]: I1006 13:35:25.872055 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x4wtp/must-gather-vgmlt"] Oct 06 13:35:25 crc kubenswrapper[4958]: I1006 13:35:25.872729 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" podUID="8aa28a48-3171-4382-9bad-39e174c8d36e" containerName="copy" containerID="cri-o://72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a" gracePeriod=2 Oct 06 13:35:25 crc kubenswrapper[4958]: I1006 13:35:25.880333 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x4wtp/must-gather-vgmlt"] Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.303676 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x4wtp_must-gather-vgmlt_8aa28a48-3171-4382-9bad-39e174c8d36e/copy/0.log" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.304477 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.495681 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8aa28a48-3171-4382-9bad-39e174c8d36e-must-gather-output\") pod \"8aa28a48-3171-4382-9bad-39e174c8d36e\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.495750 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq9wn\" (UniqueName: \"kubernetes.io/projected/8aa28a48-3171-4382-9bad-39e174c8d36e-kube-api-access-wq9wn\") pod \"8aa28a48-3171-4382-9bad-39e174c8d36e\" (UID: \"8aa28a48-3171-4382-9bad-39e174c8d36e\") " Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.503447 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa28a48-3171-4382-9bad-39e174c8d36e-kube-api-access-wq9wn" (OuterVolumeSpecName: "kube-api-access-wq9wn") pod "8aa28a48-3171-4382-9bad-39e174c8d36e" (UID: "8aa28a48-3171-4382-9bad-39e174c8d36e"). InnerVolumeSpecName "kube-api-access-wq9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.598577 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq9wn\" (UniqueName: \"kubernetes.io/projected/8aa28a48-3171-4382-9bad-39e174c8d36e-kube-api-access-wq9wn\") on node \"crc\" DevicePath \"\"" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.663657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aa28a48-3171-4382-9bad-39e174c8d36e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8aa28a48-3171-4382-9bad-39e174c8d36e" (UID: "8aa28a48-3171-4382-9bad-39e174c8d36e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.701190 4958 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8aa28a48-3171-4382-9bad-39e174c8d36e-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.923893 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa28a48-3171-4382-9bad-39e174c8d36e" path="/var/lib/kubelet/pods/8aa28a48-3171-4382-9bad-39e174c8d36e/volumes" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.963175 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x4wtp_must-gather-vgmlt_8aa28a48-3171-4382-9bad-39e174c8d36e/copy/0.log" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.963809 4958 generic.go:334] "Generic (PLEG): container finished" podID="8aa28a48-3171-4382-9bad-39e174c8d36e" containerID="72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a" exitCode=143 Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.963866 4958 scope.go:117] "RemoveContainer" containerID="72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.964008 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x4wtp/must-gather-vgmlt" Oct 06 13:35:26 crc kubenswrapper[4958]: I1006 13:35:26.982933 4958 scope.go:117] "RemoveContainer" containerID="db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456" Oct 06 13:35:27 crc kubenswrapper[4958]: I1006 13:35:27.068370 4958 scope.go:117] "RemoveContainer" containerID="72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a" Oct 06 13:35:27 crc kubenswrapper[4958]: E1006 13:35:27.069161 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a\": container with ID starting with 72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a not found: ID does not exist" containerID="72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a" Oct 06 13:35:27 crc kubenswrapper[4958]: I1006 13:35:27.069204 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a"} err="failed to get container status \"72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a\": rpc error: code = NotFound desc = could not find container \"72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a\": container with ID starting with 72331bf4b89470bca1271171ae4824eadbb2f1ae5683f92d0ecf735dd1d7a09a not found: ID does not exist" Oct 06 13:35:27 crc kubenswrapper[4958]: I1006 13:35:27.069236 4958 scope.go:117] "RemoveContainer" containerID="db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456" Oct 06 13:35:27 crc kubenswrapper[4958]: E1006 13:35:27.069618 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456\": container with ID starting with 
db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456 not found: ID does not exist" containerID="db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456" Oct 06 13:35:27 crc kubenswrapper[4958]: I1006 13:35:27.069648 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456"} err="failed to get container status \"db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456\": rpc error: code = NotFound desc = could not find container \"db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456\": container with ID starting with db04cb24d00cc71b6958ce019828946407954a2b07e6457dc7af8d9cbd486456 not found: ID does not exist" Oct 06 13:35:53 crc kubenswrapper[4958]: I1006 13:35:53.801852 4958 patch_prober.go:28] interesting pod/machine-config-daemon-whw6z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:35:53 crc kubenswrapper[4958]: I1006 13:35:53.802692 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:35:53 crc kubenswrapper[4958]: I1006 13:35:53.802764 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" Oct 06 13:35:53 crc kubenswrapper[4958]: I1006 13:35:53.803902 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2f276db3618825e62e624b3eb0087b1b964491be60c64739bf3f0b8d6d1c8b2"} 
pod="openshift-machine-config-operator/machine-config-daemon-whw6z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:35:53 crc kubenswrapper[4958]: I1006 13:35:53.804016 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" podUID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerName="machine-config-daemon" containerID="cri-o://b2f276db3618825e62e624b3eb0087b1b964491be60c64739bf3f0b8d6d1c8b2" gracePeriod=600 Oct 06 13:35:54 crc kubenswrapper[4958]: I1006 13:35:54.239176 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b" containerID="b2f276db3618825e62e624b3eb0087b1b964491be60c64739bf3f0b8d6d1c8b2" exitCode=0 Oct 06 13:35:54 crc kubenswrapper[4958]: I1006 13:35:54.239361 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerDied","Data":"b2f276db3618825e62e624b3eb0087b1b964491be60c64739bf3f0b8d6d1c8b2"} Oct 06 13:35:54 crc kubenswrapper[4958]: I1006 13:35:54.239511 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-whw6z" event={"ID":"1a63a5e9-6d00-40ff-a080-cfcaf03a1c1b","Type":"ContainerStarted","Data":"160e7b57ffacd7a6b8855b571815530a5e6d8dd32252b294e1ba20b225947997"} Oct 06 13:35:54 crc kubenswrapper[4958]: I1006 13:35:54.239539 4958 scope.go:117] "RemoveContainer" containerID="7fae5e7c3f2061ad0524619a6772893fa1ae12670cf4b11ced46230c90fe0223"